Planet Debian

Planet Debian - http://planet.debian.org/

Steve Kemp: If you signed my old key, please consider repeating the process

5 September, 2014 - 01:08

I'm in the process of rejoining the Debian project. When I was previously a member I had a 1024-bit key, which is considered to be a poor size these days.

Happily I've already generated a new key, which is much bigger.

If you've signed my old key, and thus trust that my identity was confirmed at some point in time, then please do consider repeating the process with the new one.

As I've signed the new key with the old one, there should be no concern that it is random/spurious/malicious.

Obviously the ideal scenario is that I meet local people to perform signing rites, in exchange for cake, beer, or other bribery.

Old key:

pub   1024D/CD4C0D9D 2002-05-29
      Key fingerprint = DB1F F3FB 1D08 FC01 ED22  2243 C0CF C6B3 CD4C 0D9D
uid                  Steve Kemp <steve@steve.org.uk>
sub   2048g/AC995563 2002-05-29

New key:

pub   4096R/0C626242 2014-03-24
      Key fingerprint = D516 C42B 1D0E 3F85 4CAB  9723 1909 D408 0C62 6242
uid                  Steve Kemp (Edinburgh, Scotland) <steve@steve.org.uk>
sub   4096R/229A4066 2014-03-24
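
For anyone wanting to cross-check before re-signing, here is a minimal sketch of the usual dance (not part of the original post; replace YOURKEYID with your own key, and only sign after verifying the fingerprint in person):

$ gpg --recv-keys 0C626242                      # fetch the new key
$ gpg --check-sigs 0C626242                     # the self-signature from the old key CD4C0D9D should be listed
$ gpg --local-user YOURKEYID --sign-key 0C626242
$ gpg --send-keys 0C626242                      # publish your signature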

Joseph Bisch: My First Package

4 September, 2014 - 22:14

I got my first package uploaded to Debian this week. That package is winetricks. It was orphaned and I adopted it. Now the latest version (0.0+20140818+svn1202) is available in sid and should migrate to testing in nine days.

I moved the VCS from collab-maint to a personal repo, since I don’t have access to collab-maint.

I also have a sponsor for slowaes. It is another orphaned package that I am adopting. The changes I made are more minor than those for winetricks; besides adding myself as maintainer, I just fixed some lintian warnings. Slowaes should be uploaded soon.

Jakub Wilk: Joys of East Asian encodings

4 September, 2014 - 20:01

In i18nspector I try to support all the encodings that were blessed by gettext, but it turns out to be more difficult than I anticipated:

$ roundtrip() { c=$(echo $1 | iconv -t $2); printf '%s -> %s -> %s\n' $1 $c $(echo $c | iconv -f "$2"); }

$ roundtrip ¥ EUC-JP
¥ -> \ -> \

$ roundtrip ¥ SHIFT_JIS
¥ -> \ -> ¥

$ roundtrip ₩ JOHAB
₩ -> \ -> ₩

Now let's do the same in Python:

$ python3 -q
>>> roundtrip = lambda s, e: print('%s -> %s -> %s' % (s, s.encode(e).decode('ASCII', 'replace'), s.encode(e).decode(e)))
>>> roundtrip('¥', 'EUC-JP')
¥ -> \ -> \
>>> roundtrip('¥', 'SHIFT_JIS')
¥ -> \ -> \
>>> roundtrip('₩', 'JOHAB')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <lambda>
UnicodeEncodeError: 'johab' codec can't encode character '\u20a9' in position 0: illegal multibyte sequence

So is 0x5C a backslash or a yen/won sign? Or both?

And what if 0x5C could be a second byte of a two-byte character? What could possibly go wrong?
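
A concrete example of that trailing-byte case (a sketch, assuming a UTF-8 locale as in the examples above): 表 (U+8868) encodes in Shift_JIS as the bytes 0x95 0x5C, so anything that naively treats 0x5C as a backslash will mangle it:

$ printf 表 | iconv -t SHIFT_JIS | od -An -tx1
 95 5c
$ printf '\x95\x5c' | iconv -f SHIFT_JIS
表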

Adnan Hodzic: Debian PPA Utility

4 September, 2014 - 19:55

Debian remains my favorite distribution; however, there’s one thing missing, and that thing is called PPAs.

There were numerous discussions on this topic inside of Debian, but AFAIK without any visible movement. Thus, I decided to publish a utility I’ve been using for some time now.

PPAs

Since their introduction, PPAs have been exclusively connected to Ubuntu and its derivatives (Mint, Elementary, etc.). But over time, a number of interesting projects have appeared whose whole development happens inside PPAs. To name a few, I’m talking about TLP, Geary, Oracle Java Installer, Elementary OS, etc. Some of these projects have been sitting in WNPP without much happening for a long time, e.g. TLP.

One option was to repackage this software and have it uploaded to Debian; the other was to go rogue and install it directly from its PPAs. The title of this post might hint at which path I took.

In theory, adding Ubuntu packages to your Debian system is a bad idea, and adding its PPAs is probably even worse. But I’ve been using a couple of these PPAs (TLP, Geary, a couple of custom icon sets) on my personal/work boxes and, to be honest, have never had a single problem.

Most of the PPAs I use are fairly simple packages with a single binary and dependencies which are found in Debian itself. Of course, I don’t recommend adding PPAs on production boxes, or even PPAs such as the GNOME3 Team PPA; for those, rather add APT pinning to your system and fetch the packages directly from Debian.

Debian PPA Utility

It is a very simple utility which adds an “add-apt-repository” binary that allows you to add PPAs on Debian. The code is available on GitHub and it’s licensed under the GPLv3, so feel free to fork it, improve it, use it and abuse it.

How to use it?

Download/Build package

You can download my signed package (the source and changes files are in the same directory).

Or you can build your own by running "dpkg-buildpackage -uc -us" inside of the debian-ppa source directory.

Install/Add PPAs

After you install the package, you’re able to run “add-apt-repository” and add PPAs, e.g.:

sudo add-apt-repository ppa:linrunner/tlp

Currently, Debian PPA Utility only works on >= Wheezy.
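
For reference, a PPA entry of this kind ultimately boils down to a plain apt source line pointing at Launchpad with an Ubuntu codename; a hypothetical example for the TLP PPA above (the codename shown is only illustrative):

deb http://ppa.launchpad.net/linrunner/tlp/ubuntu trusty main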

At this point I have no plans to try pushing this utility into Debian, as I’m sure even this blog post will be labelled as heresy by many.

Update!

It was just pointed out to me that “add-apt-repository” is available in the “software-properties-common” package. However, the “add-apt-repository” binary present in that package adds Debian codenames to your list file instead of Ubuntu codenames, which, without manual changes, makes the whole PPA entry useless.

I believe codename handling is better in “Debian PPA Utility”. I admit my only mistake: instead of fixing things in the “software-properties-common” package, I made a completely new utility which aims to do the same thing.

Added Conflicts/Replaces: software-properties-common to the debian/control file.

Anyway, enjoy!

Raphaël Hertzog: The problem of distributing applications

4 September, 2014 - 16:29

A few days ago I watched a Q/A session with Linus Torvalds at Debconf 14. One of Linus’s main complaints about Linux distributions was the way distributions end up using different versions of libraries than those used during application development, and the fact that it’s next to impossible to properly support all Linux distributions at the same time due to this kind of difference.


And now I just discovered a new proposal from the systemd team that basically tries to address this: Revisiting how we put together Linux Systems.

They suggest making extensive use of btrfs subvolumes to host multiple variants of the /usr tree (which is supposed to contain all the invariant system code/data) that you could combine with multiple runtime/framework subvolumes, thanks to filesystem namespaces, and make available to individual applications.

This way of grouping libraries in “runtime subvolumes” reminds me a bit of the concepts of baserock (they are using git instead of btrfs) and while I was a bit dubious of all this (because it goes against quite a few of the principles of distribution integration) I’m beginning to believe that there’s room for both models to work together.

It would be nice if Debian could become the reference distribution that upstream developers use to develop against Linux. This would in turn mean that when upstreams distribute their applications in this new form, they would provide (or reference) Debian-based subvolumes ready for use by users (even those who are not using Debian as their main OS). And those subvolumes would be managed by the Debian project (probably built automatically from our collection of .debs).

We’re still quite far from this goal but it will be interesting to see this idea mature and become reality. There are plenty of challenges facing us.


Russell Coker: Inteltech/Clicksend SMS Script

4 September, 2014 - 12:19

USER=username
API_KEY=1234ABC
OUTPUTDIR=/var/spool/sms
LOG_SERVICE=local1

I’ve just written the below script to send SMS via the inteltech.com/clicksend.com service. It takes the above configuration in /etc/sms-pass.cfg, where the username is assigned with the clicksend web page and the API key is a long hex string that clicksend provides as a password. LOG_SERVICE is the syslog facility to use for the log messages; on systems that are expected to send many messages I use “local1”, and I use “user” for development systems.

I hope this is useful to someone, and if you have any ideas for improvement then please let me know.

#!/bin/sh
# $1 is destination number
# text is on standard input
# standard output gives message ID on success, and 0 is returned
# standard error gives error from server on failure, and 1 is returned

. /etc/sms-pass.cfg
OUTPUT=$OUTPUTDIR/out.$$
TEXT=`tr "[:space:]" + | cut -c 1-159`

logger -t sms -p $LOG_SERVICE.info "sending message to $1"
wget -O $OUTPUT "https://api.clicksend.com/http/v2/send.php?method=http&username=$USER&key=$API_KEY&to=$1&message=$TEXT" > /dev/null 2> /dev/null

if [ "$?" != "0" ]; then
  echo "Error running wget" >&2
  logger -t sms -p $LOG_SERVICE.err "failed to send message \"$TEXT\" to $1 – wget error"
  exit 1
fi

if ! grep -q ^.errortext.Success $OUTPUT ; then
  cat $OUTPUT >&2
  echo >&2
  ERR=$(grep ^.errortext $OUTPUT | sed -e s/^.errortext.// -e s/..errortext.$//)
  logger -t sms -p $LOG_SERVICE.err "failed to send message \"$TEXT\" to $1 – $ERR"
  rm $OUTPUT
  exit 1
fi

ID=$(grep ^.messageid $OUTPUT | sed -e s/^.messageid.// -e s/..messageid.$//)
rm $OUTPUT

logger -t sms -p $LOG_SERVICE.info "sent message to $1 with ID $ID"
exit 0
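
For completeness, a hypothetical invocation (assuming the script is saved as send-sms somewhere in $PATH and made executable); on success the message ID is printed on standard output, as documented in the header comments:

$ echo "backup failed on web01" | send-sms 61412345678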


Steve Kemp: systemd, a brave new world

4 September, 2014 - 09:47

After spending a while fighting with upstart, at work, I decided that systemd couldn't be any worse and yesterday morning upgraded one of my servers to run it.

I have two classes of servers:

  • Those that run standard daemons, with nothing special.
  • Those that run different services under runit
    • For example docker guests, node.js applications, and similar.

I thought it would be a fair test to upgrade one of each kind of system, to see how it worked.

The Debian wiki has instructions for installing Systemd, and both systems came up just fine.

Although I realize I should replace my current runit jobs with systemd units, I didn't want to do that yet. So I wrote a systemd .service file to launch runit against /etc/service and, as expected, that was fine.
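
A minimal sketch of such a unit (an illustration only, assuming the runit package's /usr/bin/runsvdir; the real file may differ) looks like this:

[Unit]
Description=runit service supervision of /etc/service

[Service]
# runsvdir supervises every service directory under /etc/service
ExecStart=/usr/bin/runsvdir -P /etc/service
Restart=always

[Install]
WantedBy=multi-user.target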

Docker was a special case. I wrote a docker.service + docker.socket file to launch the daemon, but when I wrote a graphite.service file to start a docker instance it kept on restarting, or failing to stop.

In short I couldn't use systemd to manage running a docker guest, but that was probably user-error. For the moment the docker-host has a shell script in root's home directory to launch the guest:

#!/bin/sh
#
# Run Graphite in a detached state.
#
/usr/bin/docker run -d -t -i -p 8080:80 -p 2003:2003 skxskx/graphite

Without getting into politics (ha), systemd installation seemed simple, resulted in a faster boot, and didn't cause me horrific problems. Yet.

ObRandom: Not sure how systemd is controlling prosody, for example. If I run the status command I can see it is using the legacy system:

root@chat ~ # systemctl status prosody.service 
prosody.service - LSB: Prosody XMPP Server
      Loaded: loaded (/etc/init.d/prosody)
      Active: active (running) since Wed, 03 Sep 2014 07:59:44 +0100; 18h ago
      CGroup: name=systemd:/system/prosody.service
          └ 942 lua5.1 /usr/bin/prosody

I've installed systemd and systemd-sysv, so I thought /etc/init.d was obsolete. I guess it is making pretend-services for things it doesn't know about (because obviously not all packages contain /lib/systemd/system entries), but I'm unsure how that works.

Stefano Zacchiroli: interview for the gnu linux setup

3 September, 2014 - 16:48
my setup, take #1

Among the various things I've caught up with during the summer, I've finally managed to set aside some time to answer a pending interview request for The [GNU/]Linux Setup: a blog run by Steven Ovadia that collects interviews about how people use GNU/Linux-based desktops.

In the interview I discuss my day to day work-flow, from GNOME Shell to Mutt, from Emacs to Notmuch, and the various glue code tools I've written for integrating them.

Enjoy!

Feedback is most welcome.

Junichi Uekawa: Can't link qemu with static.

3 September, 2014 - 16:04
Can't link qemu statically. I was puzzled, but it looks wrong that pkg-config for libssh2 doesn't output -lgpg-error; gcrypt depends on gpg-error.
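
A quick way to check what the .pc file hands to a static link (a hypothetical check, not from the original post):

$ pkg-config --libs --static libssh2
# if libssh2 is built against libgcrypt, the static output should also pull in
# -lgpg-error (which gcrypt needs), but reportedly it does not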

Dirk Eddelbuettel: littler is faster at doing nothing!

3 September, 2014 - 10:25

With yesterday's announcement of littler 0.2.0, I kept thinking about a few not-so-frequently-asked but recurring questions about littler. And an obvious one is of course the relationship to Rscript.

As we have pointed out before, littler preceded Rscript. Now, with Rscript being present in every R installation, it is of course by now more widely known.

But there is one important aspect which I stressed once or twice in the past and which bears repeating. Due to the lean and mean way in which littler is designed and set up, it actually starts a lot faster than either Rscript or R. How, you ask? Well, we actually query the environment at build time and hardcode a number of settings which R and Rscript re-acquire and learn each time they start. Which is more flexible. But slower.

So consider the following, really simple example. In it, we create a simple worker function f which launches the given (and simplest possible) R command of just quitting 250 times (by launching a shell command which loops). We then let littler, Rscript and R (in its just execute this expression mode) run this, and time it via one of the common benchmarking packages.

R> library(rbenchmark)
R> f <- function(cmd) { system(paste0("bash -c \"for i in \\$(seq 1 250); do ", cmd, " -e 'q()'; done\"")); }
R> res <- benchmark(f("r"), f("Rscript"), f("R --slave"), replications=5, order="relative")
R> res
            test replications elapsed relative user.self sys.self user.child sys.child
1         f("r")            5  32.256    1.000     0.001    0.004     26.538     5.706
2   f("Rscript")            5 180.154    5.585     0.001    0.004    152.751    28.522
3 f("R --slave")            5 302.714    9.385     0.001    0.004    270.569    33.098
R> 

So there: littler does "nothing" about five times as fast as Rscript, and about nine times as fast as R. Not that this matters greatly -- but when you design something for repeated (unsupervised) execution, say in a cronjob, it might as well be lightweight.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Mario Lang: exercism.io C++ track

3 September, 2014 - 04:20

exercism.io is a crowd-sourced mentorship platform for learning to program. In my opinion, they do a lot of things right. In particular, an exercise on exercism.io consists of a descriptive README file and a set of test cases implemented in the target programming language. The tests have two positive sides: You learn to do test-driven development, which is good. And you also have an automated validation suite. Of course, a test cannot give you feedback on your actual implementation, but at least it can give you an idea if you have managed to implement what was required of you. But that is not the end of it. Once you have submitted a solution to a particular exercise, other users of exercism.io can comment on your implementation. And you can, as soon as you have submitted the first implementation, look at the solutions that other people have submitted to that particular problem. So knowledge transfer can happen both ways from there on: You can learn new things from how other people have solved the same problem, and you can also tell other people about things they might have done in a different way. These comments are, somewhat appropriately, called nitpicks on exercism.io.

Now, exercism has recently gained a C++ track. That track is particularly fun, because it is based on C++11, Boost, and CMake, things that are quite standard in C++ development these days. And the use of C++11 and Boost makes some solutions really shine.

Ian Campbell: Becoming A Debian Developer

3 September, 2014 - 01:58

After becoming a DM at Debconf12 in Managua, Nicaragua and entering the NM queue during Debconf13 in Vaumarcus, Switzerland I received the mail about 24 hours too late to officially become a DD during Debconf14 in Portland, USA. Nevertheless it was a very pleasant surprise to find the mail in my INBOX this morning confirming that my account had been created and that I was officially ijc@debian.org. Thanks to everyone who helped/encouraged me along the way!

I don't imagine much will change in practice, I intend to remain involved in the kernel and Debian Installer efforts as well as continuing to contribute to the Xen packaging and to maintain qcontrol (both in Debian and upstream) and sunxi-tools. I suppose I also still maintain ivtv-utils and xserver-xorg-video-ivtv but they require so little in the way of updates that I'm not sure they count.

Raphaël Hertzog: My Free Software Activities in August 2014

2 September, 2014 - 23:58

This is my monthly summary of my free software related activities. If you’re among the people who made a donation to support my work (65.55 €, thanks everybody!), then you can learn how I spent your money. Otherwise it’s just an interesting status update on my various projects.

Distro Tracker

Even though I was officially on vacation during 3 of the 4 weeks of August, I spent many nights working on Distro Tracker. I’m pleased to have managed to bring back Python 3 compatibility across the whole (tested) code base. The full test suite now passes with Python 3.4 and Django 1.6 (or 1.7).

From now on, I’ll run “tox” on all code submitted to make sure that we won’t regress on this point. tox also runs flake8 for me so that I can easily detect when the submitted code doesn’t respect the PEP8 coding style. It also catches other interesting mistakes (like unused variables or overly complex functions).
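
For the curious, a tox configuration along those lines (a hypothetical, stripped-down sketch, not the actual distro-tracker file) looks roughly like this:

[tox]
envlist = py34, flake8

[testenv]
# run the Django test suite for the project
deps = Django>=1.6,<1.8
commands = python ./manage.py test

[testenv:flake8]
# PEP8 / static checks on the whole tree
deps = flake8
commands = flake8 .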

Getting the code to pass flake8 was also a major effort; it resulted in a huge commit (89 files changed, 1763 insertions, 1176 deletions).

Thanks to the extensive test suite, all that refactoring resulted in only two regressions, which I fixed rather quickly.

Some statistics: 51 commits over the last month, 41 by me, 3 by Andrew Starr-Bochicchio, 3 by Christophe Siraut, 3 by Joseph Herlant and 1 by Simon Kainz. Thanks to all of them! Their contributions ported some features that were already available on the old PTS. The new PTS is now warning of upcoming auto-removals, is displaying problems with upstream URLs, includes a short package description in the page title, and provides a link to screenshots (if they exist on screenshots.debian.net).

We still have plenty of bugs to handle, so you can help too: check out https://tracker.debian.org/docs/contributing.html. I always leave easy bugs for others to handle, so grab one and get started! I’ll review your patch with pleasure.

Tryton

After my last batch of contributions to Tryton’s French Chart of Accounts (#4108, #4109, #4110, #4111) Cédric Krier granted me commit rights to the account_fr mercurial module.

Debconf 14

I wasn’t able to attend this year but, thanks to the awesome work of the video team, I watched some videos (and I still have a bunch that I want to see). Some of them were put online the day after they had been recorded. Really amazing work!

Django 1.7

After the initial bug reports, I got some feedback from maintainers who feared that it would be difficult to get their packages working with Django 1.7. I helped them as best I could by providing some patches (for horizon, for django-restricted-resource, for django-testscenarios).

Since I expected many maintainers not to be very proactive, I rebuilt all packages with Django 1.7 to detect at least those that would fail to build, and tagged all the corresponding bug reports as confirmed.

Looking at https://bugs.debian.org/cgi-bin/pkgreport.cgi?users=python-django@packages.debian.org;tag=django17, one can see that some progress has been made, with 25 packages fixed. Still, at least 25 others remain problematic in sid and 35 have not been investigated at all (except for the automatic rebuild, which passed). Again, your help is more than welcome!

It’s easy to install python-django 1.7 from experimental and then try to use/rebuild the packages from the above list.
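
Concretely, assuming the experimental suite is already in your sources.list, that is just:

$ sudo apt-get install -t experimental python-django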

Dpkg translation

With the freeze approaching, I wanted to ensure that dpkg was fully translated into French. I thus pinged debian-l10n-french@lists.debian.org and merged some translations that were done by volunteers. Unfortunately it looks like nobody really stepped up to maintain it in the long run… so I did the required update myself when dpkg 1.17.12 got uploaded.

Is there anyone willing to manage dpkg’s French translation? With the latest changes in 1.17.13, we have again a few untranslated strings:
$ for i in $(find . -name fr.po); do echo $i; msgfmt -c -o /dev/null --statistics $i; done
./po/fr.po
1083 translated messages, 4 fuzzy translations, 1 untranslated message.
./dselect/po/fr.po
268 translated messages, 3 fuzzy translations.
./scripts/po/fr.po
545 translated messages.
./man/po/fr.po
2277 translated messages, 8 fuzzy translations, 3 untranslated messages.

Misc stuff

I made an xsane QA upload (it’s currently orphaned) to drop the (build-)dependency on liblcms1 and avoid getting it removed from Debian testing (see #745524). For the record, how-can-i-help warned me of this after one dist-upgrade.

With the Django 1.7 work and the need to open up an experimental branch, I decided to switch python-django’s packaging to git even though the current team policy is to use subversion. This triggered (once more) the discussion about a possible switch to git and I was pleased to see more enthusiasm this time around. Barry Warsaw tested a few workflows, shared his feelings and pushed toward a live discussion of the switch during Debconf. It looks like it might happen for good this time. I contributed my share to the discussions on the mailing list.

Thanks

See you next month for a new summary of my activities.


Daniel Baumann: Public Perception

2 September, 2014 - 18:11

I noticed at DebConf 14 that people still think I maintain many packages in Debian. That has not been true for several years.

So, to clarify once and for all: I maintain 17 packages:

Not more, not less, and I also do not intend to upload any new packages to Debian.


Antonio Terceiro: DebConf 14: Community, Debian CI, Ruby, Redmine, and Noosfero

2 September, 2014 - 09:46

This time, for personal reasons I wasn’t able to attend the full DebConf, which started on Saturday, August 22nd. I arrived in Portland by noon on Tuesday the 26th, the 4th day of the conference. Even though I would have liked to arrive earlier, the loss was alleviated by the work of the amazing DebConf video team: I was able to follow remotely most of the sessions I would have liked to attend had I been there.

As I will say to everyone, DebConf is for sure the best conference I have ever attended. The technical and philosophical discussions that take place in talks, BoF sessions or even unplanned ad-hoc gatherings are deep. The hacking moments where you have a chance to pair with fellow developers, with whom you usually only have contact remotely via IRC or email, are precious.

That is all great. But definitively, catching up with old friends, and making new ones, is what makes DebConf so special. Your old friends are your old friends, and meeting them again after so much time is always a pleasure. New friendships will already start with a powerful bond, which is being part of the Debian community.

Being only 4 hours behind my home time zone, jetlag wasn’t a big problem during the day. However, I was waking up too early in the morning and consequently getting tired very early at night, so I mostly didn’t go out after hacklabs were closed at 10PM.

Despite all of the discussions, being in the audience for several talks, other social interactions and whatnot, during this DebConf I managed to do quite a bit of useful work.

debci and the Debian Continuous Integration project

I gave a talk where I discussed past, present, and future of debci and the Debian Continuous Integration project. The slides are available, as well as the video recording. One thing I want you to take away is that there is a difference between debci and the Debian Continuous Integration project:

  • debci is a platform for Continuous Integration specifically tailored to the Debian repository and similar ones. If you work on a Debian derivative, or otherwise provide Debian packages in a repository, you can use debci to run tests for your stuff.
    • a (very) few things in debci, though, are currently hardcoded for Debian. Other projects using it would provide nice and needed peer pressure to get rid of those.
  • Debian Continuous Integration is Debian’s instance of debci, which currently runs tests for all packages in the unstable distribution that provide autopkgtest support. It will be expanded in the future to run tests on other suites and architectures.

A few days before DebConf, Cédric Boutillier managed to extract gem2deb-test-runner from gem2deb, so that autopkgtest tests can be run against any Ruby package that has tests by running gem2deb-test-runner --autopkgtest. gem2deb-test-runner will do the right thing: make sure that the tests don’t use code from the source package, but instead run against the installed package.

Then, right after my talk I was glad to discover that the Perl team is also working on a similar tool that will automate running tests for their packages against the installed package. We agreed that they will send me a whitelist of packages in which we could just call that tool and have it do The Right Thing.

We might be talking here about getting autopkgtest support (and consequently continuous integration) for free for almost 2000 packages. The missing bits for this to happen are:

  • making debci use a whitelist of packages that, while not having the appropriate Testsuite: autopkgtest field in the Sources file, could be assumed to have autopkgtest support by calling the right tool (gem2deb-test-runner for Ruby, or the Perl team’s new tool for Perl packages).
  • make the autopkgtest test runner assume a corresponding, implicit debian/tests/control when it does not exist in those packages (a sketch of what such an implicit file might look like follows this list).
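
The implicit control file for a Ruby package could look something like this (a hypothetical sketch only; the exact fields the runner would assume are still to be decided):

Test-Command: gem2deb-test-runner --autopkgtest
Depends: @, gem2deb-test-runner
Restrictions: allow-stderr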

Over a few days I mentored Lucas Kanashiro, who also attended DebConf, on writing a patch to add support for email notifications in debci so maintainers can be pro-actively notified of status changes (pass/fail, fail/pass) in their packages.

I have also started hacking on the support for distributed workers, based on the initial work by Martin Pitt:

  • updated the amqp branch against the code in the master branch.
  • added a debci enqueue command that can be used to force test runs for packages given on the command line.
  • I sent a patch for librabbitmq that adds support for limiting the number of messages the server will send to a connected client. With this patch applied, the debci workers were modified to request being sent only 1 message at a time, so late workers will start to receive packages to process as soon as they are up. Without this, a single connected worker would receive all messages right away, while a second worker that comes up 1 second later would sit idle until new packages are queued for testing.
Ruby

I had some discussion with Christian about making Rubygems install to $HOME by default when the user is not root. We discussed a few implementation options, and while I don’t have a solution yet, we have a better understanding of the potential pitfalls.

The Ruby BoF session on Friday produced a few interesting discussions. Some takeaway points include, but are not limited to:

  • Since the last DebConf, we were able to remove all obsolete Ruby interpreters, and now only have Ruby 2.1 in unstable. Ruby 2.1 will be the default version in Debian 8 (jessie).
  • There is user interest in being able to run the interpreter from Debian, but install everything else from Rubygems.
  • We are lacking in all the documentation-related goals for jessie that were proposed at the previous DebConf.
Redmine

I was able to make Redmine work with the Rails 4 stack we currently have in unstable/testing. This required using a snapshot of the still unreleased version 3.0 based on the rails-4.1 branch in the upstream Subversion repository as source.

I am a little nervous about using an upstream snapshot, though. According to the project’s roadmap (http://www.redmine.org/projects/redmine/roadmap), the only purpose of the 3.0 release will be to upgrade to Rails 4, but before that happens there should be a 2.6.0 release that is also not released yet. 3.0 should be equivalent to that 2.6.0 version both feature-wise and, especially, bug-wise. The only problem is that we don’t know what that 2.6.0 looks like yet. According to the roadmap it seems there is not much left in terms of features for 2.6.0, though.

The updated package is not in unstable yet, but will be soon. It needs more testing, and a good update to the documentation. Those interested in helping to test Redmine on jessie before the freeze please get in touch with me.

Noosfero

I gave a lightning talk on Noosfero, a platform for social networking websites that I am upstream for. It is a Rails application licensed under the AGPLv3, and there are packages for wheezy. You can check out the slides I used. Video recording is not available yet, but should be soon.

That’s it. I am looking forward to DebConf 15 at Heidelberg. :-)

Lars Wirzenius: 45

2 September, 2014 - 03:28
  1. I should stop being childish, but I don't wanna.

Dirk Eddelbuettel: littler 0.2.0

2 September, 2014 - 02:13
We are happy to announce a new release of littler.

A few minor things have changed since the last release:
  • A few new examples were added or updated, including use of the fabulous new docopt package by Edwin de Jonge which makes command-line parsing a breeze.
  • Other new examples show simple calls to help with sweave, knitr, roxygen2, Rcpp's attribute compilation, and more.
  • We also wrote an entirely new webpage with usage examples.
  • A new option -d | --datastdin was added which will read stdin into a data.frame variable X.
  • The repository has been moved to this GitHub repo.
  • With that, the build process was updated throughout, and now also reflects the current git commit at the time of build.

Full details are provided in the ChangeLog.

The code is available via the GitHub repo, from tarballs off my littler page and the local directory here. A fresh package will go to Debian's incoming queue shortly as well.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Joseph Bisch: Debconf Wrapup

2 September, 2014 - 02:02

Debconf14 was the first Debconf I attended. It was an awesome experience.

Debconf14 started with a Meet and Greet before the Welcome Talk. I got to meet people and find out what they do for Debian. I also got to meet other GSoC students that I had only previously interacted with online. During the Meet and Greet I also met one of my mentors for GSoC, Zack. Later in the conference I met another of my mentors, Piotr. Previously I only interacted with Zack and Piotr online.

On Monday we had the OpenPGP Keysigning. I got to meet people and exchange information so that we could later sign keys. Then on Tuesday I gave my talk about debmetrics as part of the larger GSoC talks.

During the conference I mostly attended talks. Then on Wednesday we had the daytrip. I went hiking at Multnomah Falls, had lunch at Rooster Rock State Park, and then went to Vista House.

Later in the conference, Zack and I did some work on debmetrics. We looked at the tests, which had some issues. I was able to fix most of the issues with the tests while I was there at Debconf. We also moved the debmetrics repository under the qa category of repositories. Previously it was a private repository.

Jo Shields: Xamarin Apt and Yum repos now open for testing

2 September, 2014 - 00:46

Howdy y’all

Two of the main things I’ve been working on since I started at Xamarin are making it easier for people to try out the latest bleeding-edge Mono, and making it easier for people on older distributions to upgrade Mono without upgrading their entire OS.

Public Jenkins packages

Every time anyone commits to Mono git master or MonoDevelop git master, our public Jenkins will try and turn those into packages, and add them to repositories. There’s a garbage collection policy – currently the 20 most recent builds are always kept, then the first build of the month for everything older than 20 builds.

Because we’re talking potentially broken packages here, I wrote a simple environment mangling script called mono-snapshot. When you install a Jenkins package, mono-snapshot will also be installed and configured. This allows you to have multiple Mono versions installed at once, for easy bug bisecting.

directhex@marceline:~$ mono --version
Mono JIT compiler version 3.6.0 (tarball Wed Aug 20 13:05:36 UTC 2014)
directhex@marceline:~$ . mono-snapshot mono
[mono-20140828234844]directhex@marceline:~$ mono --version
Mono JIT compiler version 3.8.1 (tarball Fri Aug 29 07:11:20 UTC 2014)

The instructions for setting up the Jenkins packages are on the new Mono web site, specifically here. The packages are built on CentOS 7 x64, Debian 7 x64, and Debian 7 i386 – they should work on most newer distributions or derivatives.

Stable release packages

This has taken a bit longer to get working. The aim is to offer packages in our Apt/Yum repositories for every Mono release, in a timely fashion, more or less around the same time as the Mac installers are released. Info for setting this up is, again, on the new website.

Like the Jenkins packages, they are designed, as far as I am able, to integrate cleanly with different versions of major popular distributions, though there are a few instances of ABI breakage in there which I have opted to fix using one evil method rather than another evil method.

Please note that these are still at “preview” or “beta” quality, and shouldn’t be considered usable in major production environments until I get a bit more user feedback. The RPM packages especially are super new, and I haven’t tested them exhaustively at this point – I’d welcome feedback.

I hope to remove the “testing!!!” warning labels from these packages soon, but that relies on user feedback, preferably to my xamarin.com account (jo.shields@).

Juliana Louback: Debconf 2014 and How I Became a Debian Contributor

1 September, 2014 - 18:06

Part 1 - Debconf 2014

This year I went to my first Debconf, which took place in Portland, OR during the last week of August 2014. All in all I have to rate my experience as very enlightening and in the end quite fun.

First of all, it was a little daunting to go to 1) a city I’d never been to before, and 2) a conference with 300+ people, only 3 of whom I knew, and even then only virtually. Not to mention I was in the presence of some extremely brilliant and well-known contributors in the Debian community, which was somewhat intimidating. Just to give you an idea, Linus Torvalds showed up for a Q&A session last Friday morning! Jealous? Actually I missed that too. It was kind of a last minute thing, booked for coincidentally the exact time I’d be flying out of Portland. I found out about it much too late. But luckily for me and maybe you, the session was filmed and can be seen here. Isn’t that a treat?

Point made, there are lots of really talented people there, both techies and non-techies. It’s easy to feel you’re out of your league; at least I did. But I’d highly encourage you to ignore such feelings if you’re ever in the same situation. Debian has been built on for a long time now, but although a lot has been done, a lot still needs to be done. The Debian community is very welcoming of new contributors and users, regardless of their level of expertise. So far I haven’t been snubbed by anyone. On the contrary, all my interactions with Debian community members have been extremely positive.

So go ahead and attend the meetings and presentations, even if you think it’s not your area of expertise. Debconf was organized (or at least this one was) as a series of talks, meet-ups and ad hoc sessions, some of which occurred simultaneously. The sessions were all about different components of the Debian universe, from presenting new features to overviews of accomplishments to discussing issues and how to fix them. A schedule with the location and description of each session was posted on the Debconf wiki. Sometimes none of the sessions at a certain time was on a topic I knew very much about. But I’d sit in anyways. There’s no rule to attending the sessions, no ‘minimum qualifications’ required. You’ll likely learn something new and you just might find out there is something you can do to contribute. There are also hackathons that are quite the thing or so I heard. Or you could walk about and meet new people, do some networking.

I have to say networking was the highlight of the Debconf for me. Remember I said I knew about 3 people who were at the conference? Well, I had actually just corresponded with those people. I didn’t really know them. So on my first day I spent quite some time shyly peeking at people’s name tags, trying to recognize someone I had ‘met’ over email or IRC. But with 300 or so people at the conference, I was unsuccessful. So I finally gave up on that strategy and walked up to a random person, stuck out my hand and said, “Hi. My name is Juliana. This is my first Debconf. What’s your name and what do you do for Debian?” This may not be according to protocol, but it worked for me. I got to meet lots of people that way, met some Debian contributors from my home country (Brazil), some from my current city (NYC), and yet others with similar interests to mine who I might work with in the near future. For example, I love Machine Learning, and I’m currently beginning my graduate studies on that track. Several Debian contributors offered to introduce me to a well-known Machine Learning researcher and Debian contributor who is in NYC. Others had tried out JSCommunicator and had lots of suggestions for new features and fixes, or wanted to know more about the project and WebRTC in general. Also, not everyone there is a super experienced Debian contributor or user. There are a lot of newbies like me.

I got to do a quick 20-min presentation and demo of the work I had done on JSCommunicator during GSoC 2014. Oh my goodness that was nerve-wracking, but not half as painful as I expected. My mentor (Daniel Pocock) wisely suggested that, when confronted with a question I didn’t know how to answer, I redirect the question to the audience. Chances are, there is someone there that knows the answer. If not, it will at least spark a good discussion.

When meeting new people at Debian, a question almost everyone asked is “How did you start working with/for Debian?”. So I thought it would be a good topic to post about.

Part 2 - How I Became a Debian Contributor

Sometime in late October of 2013 (I think) I received an email from one of my professors at UNIRIO forwarding a description of the Outreach Program for Women. OPW is a program organized by GNOME that endeavors to get more women involved in FOSS. OPW is similar to Google Summer of Code; you work remotely from home, guided by an assigned mentor. Debian was one of the 8 participating organizations that year. There was a list of project proposals which I perused; a few of them caught my eye, and those were all Debian projects. I’d already been a fan of FOSS before. I had used the Ubuntu and Debian OS, and I’d migrated to GIMP from Photoshop and to OpenOffice from Microsoft Office, for example. I’d strongly advocated the use of some of my preferred open source apps and programs to my friends and family. But I hadn’t ever contributed to a FOSS project.

There’s no time like the present, so I reached out to the mentor responsible for one of the projects I was interested in, Daniel Pocock. Daniel guided me through making a small contribution to a FOSS project, which serves as a token demonstration of my abilities and is part of the application process. I added a small feature to JMXetric (https://github.com/ganglia/jmxetric) and suggested a fix for an issue in the xTuple project. Actually, I had forgotten about this. Recently I made another contribution to xTuple; it’s funny to see things come full circle. I also had to write a profile-ish description of my experience and how I intended to contribute during OPW on the Debian wiki; if you’d like, you can check it out here.

I wouldn’t follow my example to a T, because in the end I didn’t make the OPW selection. Actually, I take that back. The fact I wasn’t chosen for OPW that year doesn’t mean I was incompetent or incapable of making a valuable contribution. OPW and GSoC do not have unlimited resources; they can’t include everyone they’d like to. They receive thousands of proposals from very talented engineers and not everyone can participate at a given moment. But even though I wasn’t selected, like I said, I could still pitch in. It’s good to keep in mind that people usually aren’t paid to contribute to FOSS. It’s usually volunteer based, which I think is one of the beauties of the FOSS community and in my opinion one of the causes of its success and great quality. People contribute because they want to, not because they have to.

I will say I was a little disappointed at not being chosen. But after being reassured that this ‘rejection’ wasn’t due to any lack on my part, I decided to continue contributing to the Debian project I’d applied to. I was beginning the final semester of my undergraduate studies, which included writing a thesis. To be able to focus on my thesis and graduate on time, I’d stopped working and was studying full time. But I didn’t want to lose practice, and contributing to a FOSS project is a great way to stay in coding shape while doing something useful. So continue contributing I did.

It paid off. I gained experience, added value to a FOSS project and I think my previous contributions added weight to the application I later made for GSoC 2014. I passed this time. To be honest, I really wasn’t counting on it. Actually, I was certain I wouldn’t pass for some reason - insecure much? But with GSoC I wasn’t too anxious about it as I was with the OPW application because by then, I was already ‘hooked’. I’d learned about all the benefits of becoming a FOSS contributor and I wasn’t stopping anytime soon. I had every intention of still working on my FOSS project with or without GSoC. GSoC 2014 ended a week ago (August 18th 2014). There’s a list of things I still want to do with JSCommunicator and you can be sure I’ll keep working on them.

P.S. This is not to say that programs like OPW and GSoC aren’t amazing programs. Try it out if you can, it’s really a great experience.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.