Planet Debian

Planet Debian - http://planet.debian.org/

Joey Hess: snowdrift - sustainable crowdfunding for free software development

2 December, 2014 - 01:36

In a recent blog post, I mentioned how lucky I feel to keep finding ways to work on free software. In the past couple years, I've had a successful Kickstarter, and followed that up with a second crowdfunding campaign, and now a grant is funding my work. A lot to be thankful for.

A one-off crowdfunding campaign to fund free software development is wonderful, if you can pull it off. It can start a new project, or kick an existing one into a higher gear. But in many ways, free software development is a poor match for kickstarter-type crowdfunding. Especially when it comes to ongoing development, which it's really hard to do a crowdfunding pitch for. That's why I was excited to find Snowdrift.coop, which has a unique approach.

Imagine going to a web page for a free software project that you care about, and seeing this button:

That's a much stronger incentive than some PayPal donation button or Flattr link! The details of how it works are explained on their intro page, or see the ever-insightful and thoughtful Mike Linksvayer's blog post about it.

When I found out about this, I immediately sent them a one-off donation. Later, I got to meet one of the developers face to face in Portland. I've also done a small amount of work on the Snowdrift platform, which is itself free software. (My haskell code will actually render that button above!)

Free software is important, and its funding should be based, not on how lucky or good we are at kickstarter pitches, but on its quality and how useful it is to everyone. Snowdrift is the most interesting thing I've seen in this space, and I really hope they succeed. If you agree, they're running their own crowdfunding campaign right now.

Diego Escalante Urrelo: Airport hack: Where to eat

2 December, 2014 - 01:31

When in an airport, and not sure where to eat, look for the place where you can find the most airport workers. They did the research for you already.

Olivier Berger: Offline backup/mirror of a Moodle course, using httrack

1 December, 2014 - 17:09

I haven't found many details online on how to make a mirror of a Moodle course that can be browsed offline, using httrack.

This could be useful both for backup purposes, or for distant learners with connectivity issues.

In my case, there’s a login/password dialog that grants access to the moodle platform, which can be processed by httrack by capturing the POST form results using the “catchurl” option.
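For reference, the POST capture itself works by letting httrack act as a temporary local proxy (a sketch from memory; check httrack's documentation for the exact prompts on your version):

```shell
# Start the capture proxy from the project directory; httrack prints a
# temporary proxy address (host:port) to configure in the browser
cd /home/myself/websites/mycourse
httrack --catchurl

# Then submit the Moodle login form once through that proxy: httrack
# records the POST data (e.g. into hts-post0) and prints the special
# "url?>postfile:..." URL to use in the mirror command
```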

The strategy I've used is to add filters so that everything is excluded, and only explicitly listed filters are then allowed to be mirrored. This allows performing the backup as a user who may have high privileges, while avoiding getting lost in loops or in the complex link-following of Moodle's UI rendering variants.

Here’s an example command line:
httrack -v -z -%F "Mirrored [from host %s [file %s [at %s]]]" -N "%h%p/%n%[id].%t" "http://mymoodle.example.com/login/index.php?>postfile:/home/myself/websites/mycourse/hts-post0" "http://mymoodle.example.com/course/view.php?id=42" -O "/home/myself/websites/mycourse" "-*/*" "+/login/index.php*" "+*/course/view.php*" "+*/mod/resource/view.php*" "+*/mod/page/view.php*" "+*/mod/forum/view.php*" "+*/mod/forum/discuss.php?d=*[0-9]" "+*/mod/url/view.php*" "+*/pluginfile.php/*" "+*/mod/feedback/view.php*" "+*/mod/feedback/analysis.php*" "+*/theme/*" "-*/course/view.php?id=43"

Let’s comment on these different parameters:

  • -v -z : shows a verbose log of what’s done, on stdout. These options may be removed if the command needs to run in batch mode.
  • -%F “Mirrored [from host %s [file %s [at %s]]]” : this adds an additional header to the pages.
  • -N “%h%p/%n%[id].%t” : converts URLs like mymoodle.example.com/mod/page/view.php?id=42 to files saved as /home/myself/websites/mycourse/mymoodle.example.com/mod/page/view42.html
  • “http://mymoodle.example.com/login/index.php?>postfile:/home/myself/websites/mycourse/hts-post0” “http://mymoodle.example.com/course/view.php?id=42” : these are the two pages to be backed up:
    • “http://mymoodle.example.com/login/index.php?>postfile:/home/myself/websites/mycourse/hts-post0” : this performs the login by sending the POST request with login and password
    • “http://mymoodle.example.com/course/view.php?id=42” : this is the entry page of the course
  • -O “/home/myself/websites/mycourse” : save to this directory. The mirror will be available at file:///home/myself/websites/mycourse/mymoodle.example.com/index.html
  • “-*/*” : don’t follow any links, except the ones explicitly allowed below:
  • “+/login/index.php*” “+*/course/view.php*” “+*/mod/resource/view.php*” “+*/mod/page/view.php*” “+*/mod/forum/view.php*” “+*/mod/forum/discuss.php?d=*[0-9]” “+*/mod/url/view.php*” “+*/pluginfile.php/*” “+*/mod/feedback/view.php*” “+*/mod/feedback/analysis.php*” “+*/theme/*” : this mirrors much of the course contents in our case : pages, PDF attachments, forum posts, etc. (you’ll need to customize this depending on your course contents)
  • “-*/course/view.php?id=43” : but don’t back up other courses the same user participates in

I hope this will be useful.

NOKUBI Takatsugu: 120th Tokyo area debian seminar

1 December, 2014 - 15:18

I attended the 120th Tokyo area Debian seminar at Shinjuku.

An attendee brought an HP Jornada 780 and tried to install Debian, so I helped him.

Using a kernel, boot loader and userland from the “Lenny on j720” page, it worked fine, except for the PCMCIA NIC.

His NIC, a Corga PCC-TD, is not listed in /etc/pcmcia/*. I didn’t have enough time to write an entry for it, so I couldn’t check it.

However, releases after Lenny don’t have the “arm” architecture, so it will be hard to upgrade. I don’t know whether recent Linux kernels work on the Jornada 780. The configuration and code for the Jornada 780 are still in the kernel, but they are probably not tested by anyone.


Gregor Herrmann: RC bugs 2014/47-48

1 December, 2014 - 04:59

these are the RC bugs I've worked on during the last two weeks:

  • #752465 – libgdbm3: "Multi-Arch:same file conflict for any pair of architectures"
    propose to close as duplicate, done so by release team
  • #759960 – src:libcatalyst-engine-psgi-perl: "libcatalyst-engine-psgi-perl: FTBFS: dh_auto_test: make -j1 test returned exit code 2"
    request package removal (pkg-perl)
  • #767010 – kadu-dev: "kadu-dev: KaduTargets.cmake hardcodes amd64 path"
    raise severity, ask about potential fix in recent upload, then close
  • #767671 – ekeyd: "ekeyd: fails to remove: subprocess installed post-removal script returned error exit status 1"
    apply patch from Cameron Norman, upload to DELAYED/2
  • #767842 – ruby-actionpack-action-caching: "ruby-actionpack-action-caching: fails to upgrade from 'wheezy' - trying to overwrite /usr/lib/ruby/vendor_ruby/action_controller/caching/actions.rb"
    propose patch (adding Breaks+Replaces), closed by maintainer the same way
  • #768710 – src:biojava3-live: "biojava3-live: FTBFS in jessie: dh_install: libbiojava3-java-doc missing files (doc/biojava/*), aborting"
    propose patch (add encoding information)
  • #768798 – libgdbm3: "libgdbm3:i386: changelog.Debian.gz different from libgdbm3:amd64"
    diagnose as possible duplicate of #752465
  • #769214 – src:dmtcp: "dmtcp: FTBFS in jessie: test failures"
    add info to the bug report
  • #769336 – request-tracker4: "request-tracker4: fails to upgrade from 'wheezy': /usr/share/dbconfig-common/scripts/request-tracker4/upgrade/sqlite3/4.2.3 exited with non-zero status [SEGFAULT]"
    add info to the bug report, tag unreproducible, close later
  • #769670 – ola-rdm-tests: "ola-rdm-tests: FTBFS in a sid chroot with pbuilder (no network)"
    provide more test results
  • #769820 – par2: "par2: "par2repair file.par2 *" buggy, fixed in latest git version 0.6.11"
    sponsor maintainer upload
  • #769833 – src:trac: "[trac] Some sources are not included in your package"
    propose patch which repacks the tarball, applied and uploaded by maintainer
  • #770411 – mpi-specs: "mpi-specs: postinst uses /usr/share/doc content (Policy 12.3)"
    drop (broken call from postinst and then whole) postinst, upload to DELAYED/5
  • #770648 – src:hiredis: "hiredis: FTBFS: Test failure"
    more triaging/testing
  • #770762 – src:libinline-java-perl: "libinline-java-perl: Build dependencies are too loose"
    upload package prepared by Peter Pentchev (pkg-perl)
  • #770844 – src:libinline-java-perl: "[libinline-java-perl] FTBFS twice in a row"
    upload package prepared by Peter Pentchev (pkg-perl)
  • #770845 – src:libinline-java-perl: "[libinline-java-perl] FTBFS: Assumes a decimal point during the tests"
    upload package prepared by Peter Pentchev (pkg-perl)
  • #771361 – sponsorship-requests: "RFS: roxterm/2.9.5-1"
    sponsor maintainer upload

Enrico Zini: cxx11-talk-examples

1 December, 2014 - 01:26
C++11 talk examples

On 2014-11-27 I gave a talk about C++ and new features introduced with C++11: these are the examples. They are all licensed under the WTFPL version 2. See cxx11-talk-notes for the talk notes.

Note that the wrapper interfaces turn errors from the underlying libraries into exceptions, so the method calls just do what they should, without the need to document special return values for errors, and without each library having to implement yet another way of reporting them.

Also note that all wrapper objects do RAII: you create them and they clean after themselves when they go out of scope.

The wrapper objects also have cast operators to make them behave as the pointer or handle that they are wrapping, so that they can be transparently passed to the underlying libraries.

A gcrypt hash class

This class is a light wrapper around gcrypt's hashing functions.

ezhash.h
#ifndef EZHASH_H
#define EZHASH_H

#include <string>
#include <gcrypt.h>

namespace ezhash {

class Hash
{
protected:
    // members can now be initialized just like this, without needing to repeat
    // their default assignment in every constructor
    gcry_md_hd_t handle = nullptr;

public:
    Hash(int algo, unsigned int flags=0);
    ~Hash();

    // Assign 'delete' to a method to tell the compiler not to generate it
    // automatically. In this case, we make the object non-copiable.
    Hash(const Hash&) = delete;
    Hash(const Hash&&) = delete;
    Hash& operator=(const Hash&) = delete;

    // Add a buffer to the hash
    void hash_buf(const std::string& buf);

    // Add the contents of a file to the hash
    void hash_file(int fd);

    // Get a string with the hexadecimal hash
    std::string read_hex(int algo=0);

    /// Pretend that we are a gcry_md_hd_t handle
    operator gcry_md_hd_t() { return handle; }
};

}

#endif
ezhash.cpp
#include "ezhash.h"
#include <unistd.h>
#include <errno.h>
#include <string>
#include <cstring>
#include <sstream>
#include <iomanip>
#include <stdexcept>

using namespace std;

namespace ezhash {

namespace {

// noreturn attribute, to tell the compiler that this function never returns
[[noreturn]] void throw_gcrypt_error(gcry_error_t err)
{
    string msg;
    msg += gcry_strsource(err);
    msg += "/";
    msg += gcry_strerror(err);
    throw runtime_error(msg);
}

string errno_str(int error)
{
    char buf[256];
#if (_POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600) && ! _GNU_SOURCE
    strerror_r(error, buf, 256);
    string res(buf);
#else
    string res(strerror_r(error, buf, 256));
#endif
    return res;
}

[[noreturn]] void throw_libc_error(int error)
{
    throw runtime_error(errno_str(error));
}

}


Hash::Hash(int algo, unsigned int flags)
{
    gcry_error_t err = gcry_md_open(&handle, algo, flags);
    if (err) throw_gcrypt_error(err);
}

Hash::~Hash()
{
    gcry_md_close(handle);
}

void Hash::hash_buf(const std::string& buf)
{
    gcry_md_write(handle, buf.data(), buf.size());
}

void Hash::hash_file(int fd)
{
    char buf[4096];
    while (true)
    {
        ssize_t res = ::read(fd, buf, 4096);
        if (res < 0) throw_libc_error(errno);
        if (res == 0) break;
        gcry_md_write(handle, buf, res);
    }
}

std::string Hash::read_hex(int algo)
{
    unsigned char* res = gcry_md_read(handle, algo);

    unsigned int len = gcry_md_get_algo_dlen(
            algo == 0 ? gcry_md_get_algo(handle) : algo);

    // Format the hash into a hex digit
    stringstream hexbuf;
    hexbuf << hex << setfill('0');
    for (unsigned i = 0; i < len; ++i)
        hexbuf << setw(2) << (unsigned)res[i];

    return hexbuf.str();
}

}
Example usage
        ezhash::Hash sha256(GCRY_MD_SHA256);
        sha256.hash_buf("ciao\n");
        sha256.hash_buf("foo\n");
        cout << sha256.read_hex() << endl;
Simple sqlite bindings

Remarkably simple sqlite3 bindings based on lambda callbacks.

ezsqlite.h
#ifndef EZSQLITE_H
#define EZSQLITE_H

#include <sqlite3.h>
#include <string>
#include <functional>
#include <stdexcept>

namespace ezsqlite {

/// RAII wrapper around a sqlite3 database handle
class DB
{
protected:
    sqlite3* handle = nullptr;

public:
    // Open a connection to a SQLite database
    DB(const std::string& filename, int flags=SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE);
    DB(const DB&) = delete;
    DB(const DB&&) = delete;
    DB& operator=(const DB&) = delete;
    ~DB();

    /**
     * Execute a query, optionally calling 'callback' on every result row
     *
     * The arguments to callback are:
     *  1. number of columns
     *  2. text values of the columns
     *  3. names of the columns
     */
    // std::function can be used to wrap any callable thing in C++
    // see: http://en.cppreference.com/w/cpp/utility/functional/function
    void exec(const std::string& query, std::function<bool(int, char**, char**)> callback=nullptr);

    /// Pretend that we are a sqlite3 pointer
    operator sqlite3*() { return handle; }
};

}

#endif
ezsqlite.cpp
#include "ezsqlite.h"

namespace ezsqlite {

DB::DB(const std::string& filename, int flags)
{
    int res = sqlite3_open_v2(filename.c_str(), &handle, flags, nullptr);
    if (res != SQLITE_OK)
    {
        // From http://www.sqlite.org/c3ref/open.html
        // Whether or not an error occurs when it is opened, resources
        // associated with the database connection handle should be
        // released by passing it to sqlite3_close() when it is no longer
        // required.
        std::string errmsg(sqlite3_errmsg(handle));
        sqlite3_close(handle);
        throw std::runtime_error(errmsg);
    }
}

DB::~DB()
{
    sqlite3_close(handle);
}

namespace {

// Adapter to have sqlite3_exec call a std::function: sqlite3_exec stops
// with an error when the callback returns nonzero, while our
// std::function returns true to keep going, so the result is inverted
int exec_callback(void* data, int columns, char** values, char** names)
{
    std::function<bool(int, char**, char**)>& cb = *static_cast<std::function<bool(int, char**, char**)>*>(data);
    return cb(columns, values, names) ? 0 : 1;
}

}

void DB::exec(const std::string& query, std::function<bool(int, char**, char**)> callback)
{
    char* errmsg;
    // Only install the C-level callback if we actually have a
    // std::function to forward to, otherwise exec_callback would
    // dereference a null pointer
    int res = sqlite3_exec(handle, query.c_str(),
                           callback ? exec_callback : nullptr,
                           callback ? &callback : nullptr, &errmsg);
    if (res != SQLITE_OK && errmsg)
    {
        // http://www.sqlite.org/c3ref/exec.html
        //
        // If the 5th parameter to sqlite3_exec() is not NULL then any error
        // message is written into memory obtained from sqlite3_malloc() and
        // passed back through the 5th parameter. To avoid memory leaks, the
        // application should invoke sqlite3_free() on error message strings
        // returned through the 5th parameter of of sqlite3_exec() after the
        // error message string is no longer needed. 
        std::string msg(errmsg);
        sqlite3_free(errmsg);
        throw std::runtime_error(msg);
    }
}

}
Example usage
    // Connect to the database
    ezsqlite::DB db("erlug.sqlite");

    // Make sure we have a table
    db.exec(R"(
        CREATE TABLE IF NOT EXISTS files (
                name TEXT NOT NULL,
                sha256sum TEXT NOT NULL
        )
    )");

    // Read the list of files that we know
    map<string, string> files;
    db.exec("SELECT name, sha256sum FROM files", [&](int columns, char** vals, char** names) {
        if (columns != 2) return false;
        files.insert(make_pair(vals[0], vals[1]));
        return true;
    });
A fast Directory object

This is a lightweight wrapper around O_PATH file descriptors for directories. I'd love to see a library of well-maintained and thin C++ bindings around libc, that do little more than turning errors into exceptions and making it also work with std::string buffers.

ezfs.h
#ifndef EZFS_H
#define EZFS_H

#include <string>
#include <functional>
#include <memory>
#include <cerrno>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <dirent.h>

namespace ezfs {

class Directory
{
protected:
    int handle = -1;

public:
    Directory(const std::string& pathname, int flags=0);
    ~Directory();
    Directory(const Directory&) = delete;
    Directory(const Directory&&) = delete;
    Directory& operator=(const Directory&) = delete;

    /// List the directory contents
    void ls(std::function<void(const dirent&)> callback);

    int open(const std::string& relpath, int flags, mode_t mode=0777);
};

std::string errno_str(int error=errno);
[[noreturn]] void throw_libc_error(int error=errno);

}

#endif
ezfs.cpp
#include "ezfs.h"
#include <stdexcept>
#include <memory>
#include <cstring>
#include <cstdlib>
#include <string>
#include <linux/limits.h>

using namespace std;

namespace ezfs {

string errno_str(int error)
{
    char buf[256];
#if (_POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600) && ! _GNU_SOURCE
    strerror_r(error, buf, 256);
    string res(buf);
#else
    string res(strerror_r(error, buf, 256));
#endif
    return res;
}

[[noreturn]] void throw_libc_error(int error)
{
    throw runtime_error(errno_str(error));
}

Directory::Directory(const std::string& pathname, int flags)
{
    handle = ::open(pathname.c_str(), O_PATH | O_DIRECTORY | flags);
    if (handle < 0) throw_libc_error();
}

Directory::~Directory()
{
    ::close(handle);
}

void Directory::ls(std::function<void(const dirent&)> callback)
{
    int fd = openat(handle, ".", O_DIRECTORY);
    if (fd < 0) throw_libc_error();

    // RAII Self-cleaning DIR object
    unique_ptr<DIR, std::function<void(DIR*)>> dir(fdopendir(fd), [](DIR* dir) { if (dir) closedir(dir); });
    if (!dir)
    {
        // fdopendir(3): After a successful call to fdopendir(), fd is used
        // internally by the implementation, and should not otherwise be used
        // by the application.
        //
        // but if the fdopendir call was not successful, fd is not managed by
        // DIR, and we still need to close it, otherwise we leak a file
        // descriptor.
        //
        // However, close() may modify errno, so we take note of the errno set
        // by fdopendir and raise the exception based on that.
        int fdopendir_errno = errno;
        close(fd);
        throw_libc_error(fdopendir_errno);
    }

    // Size the dirent buffer properly
    const unsigned len = offsetof(dirent, d_name) + PATH_MAX + 1;
    unique_ptr<dirent, std::function<void(void*)>> dirbuf((dirent*)malloc(len), free);

    while (true)
    {
        dirent* res;
        int err = readdir_r(dir.get(), dirbuf.get(), &res);

        // End of directory contents
        if (err == 0)
        {
            if (res)
                callback(*res);
            else
                break;
        } else
            throw_libc_error(err);
    }
}

int Directory::open(const std::string& relpath, int flags, mode_t mode)
{
    int res = openat(handle, relpath.c_str(), flags, mode);
    if (res < 0) throw_libc_error();
    return res;
}

}
Example usage
        // This is almost the equivalent of running "sha256sum ."
        ezfs::Directory dir(".");
        dir.ls([&](const dirent& d) {
            if (d.d_type != DT_REG) return;

            ezhash::Hash sha256(GCRY_MD_SHA256);
            // I have no RAII wrapper around file handles at the moment, so
            // I'll have to use a try/catch for cleaning up after errors
            int fd = dir.open(d.d_name, O_RDONLY);
            try {
                sha256.hash_file(fd);
                close(fd);
            } catch (...) {
                close(fd);
                throw;
            }

            cout << sha256.read_hex() << "  " << d.d_name << endl;
        });

Thorsten Alteholz: My Debian Activities in November 2014

1 December, 2014 - 01:18

FTP assistant

In contrast to last month, this month has been rather quiet and I really liked that. The stress has moved on to the next team. So all in all I marked 101 packages for accept and had to reject 27 packages. As I mostly reviewed really new packages, I didn’t have to file any RC bugs this month.

Squeeze LTS

This was my fifth month that I did some work for the Squeeze LTS initiative, started by Raphael Hertzog at Freexian.

This month I got assigned a workload of 14.25h and I spent these hours uploading new versions of:

  • [DLA 82-1] wget security update
  • [DLA 84-1] curl security update
  • [DLA 89-1] nss security update
  • [DLA 90-1] imagemagick security update
  • [DLA 94-1] php5 security update
  • [DLA 97-1] eglibc security update

I also uploaded the [DLA 85-1] libxml-security-java security update, but as none of the LTS sponsors had any interest in this package, I did this in my “spare” time. A package with security in its name should not be affected by security issues.

My failure of the month has been the binutils package. Although the security team prepared the way for finding the correct patches for all those CVEs, I somehow managed not to find them. This is embarrassing …

I am also a bit disappointed by current LTS users. All important packages were made available for testing before being uploaded to the archive. Apart from some brave fellow DDs, no other feedback was reported on debian-lts. Complaints arrived only once the packages had finally been uploaded. Do admins have so much time nowadays that they don’t need some kind of testbed? Times are changing …

Other packages

This month I even found some time to sponsor uploads, so please welcome a new version of fastaq in experimental and patiently wait for aegaen and kmc to pass NEW.

At this point I also want to mention the Debian Med Advent Calendar, which was announced in this email and already mentioned by Andreas in his latest Debian Med bits. Everybody is invited to take care of as many poor souls as possible.

Support

If you would like to support my Debian work you could either be part of the Freexian initiative (see above) or consider sending some bitcoins to 1JHnNpbgzxkoNexeXsTUGS6qUp5P88vHej. Contact me at donation@alteholz.eu if you prefer another way to donate. Every kind of support is most appreciated.

Enrico Zini: cxx11-talk-notes

1 December, 2014 - 00:52
C++11 talk notes

On 2014-11-27 I gave a talk about C++ and new features introduced with C++11: these are the talk notes. See cxx11-talk-examples for the examples.

Overview of programming languages

It has to be as fast as possible, so interpreted languages are out.

You don't want to micro manage memory, so C is out.

You don't want to require programmers to have a degree, so C++ is out.

You want fast startup and not depend on a big runtime, so Java is out.

[...]

(Bram Moolenaar)

C++ secret cultist protip

Do not call up what you cannot put down.

C++ is a compiled language

It is now possible to use the keyword constexpr to mark functions and objects that can be used at compile time:

/*
 * constexpr tells the compiler that a variable or function can be evaluated at
 * compile time.
 *
 * constexpr functions can also be run at run time, if they are called with
 * values not known at compile time.
 *
 * See http://en.cppreference.com/w/cpp/language/constexpr for more nice examples
 *
 * It can be used to avoid using constants in code, and using instead functions
 * for computing hardware bitfields or physical values, without losing in
 * efficiency.
 */

#include <iostream>

using namespace std;

constexpr int factorial(int n)
{
    return n <= 1 ? 1 : (n * factorial(n-1));
}

int main()
{
    cout << "Compile time factorial of 6: " << factorial(6) << endl;

    cout << "Enter a number: ";
    int a;
    cin >> a;

    cout << "Run time factorial of " << a << ": " << factorial(a) << endl;
}

See also this for more nice examples. See this and this for further discussion.

Multiline strings
        const char* code = R"--(
          printf("foo\tbar\n");
          return 0;
        )--";

See this.

C++ memory management protip

RAII: Resource Acquisition Is Instantiation

This is not new in C++11, but in my experience I have rarely seen it mentioned in C++ learning material, and it does make a huge difference in my code.

See this and this for details.

Constructors and member initializer lists

Initializers in curly braces now have their own type: std::initializer_list:

#include <string>
#include <iostream>
#include <unordered_set>

using namespace std;

// std::initializer_list<…>
//   will have as its value all the elements inside the curly braces

string join(initializer_list<string> strings)
{
    string res;
    for (auto str: strings)
    {
        if (!res.empty())
            res += ", ";
        res += str;
    }
    return res;
}

int main()
{
    unordered_set<string> blacklist{ ".", "..", ".git", ".gitignore" };

    cout << join({ "antani", "blinda" }) << endl;
}

See this for details, including the new uniform initialization trick of omitting parentheses in constructors so that you can call normal constructors and initializer_list constructors with the same syntax, which looks like an interesting thing when writing generic code in templates.

Type inference

I can now use auto instead of a type to let the compiler automatically compute the value of something I assign to:

        auto i = 3 + 2;

        // See also https://github.com/esseks/monicelli
        vector<string> names{ "antani", "blinda", "supercazzola" };
        for (auto i = names.cbegin(); i != names.cend(); ++i)
            cout << *i;

        template<typename T>
        T frobble(const T& stuff)
        {
             // This will work whatever type is returned by stuff.read()
             auto i = stuff.read();
             // …
        }

See this for more details.

Range-based for loop

C++ now has an equivalent of the various foreach constructs found in several interpreted languages!

        for (auto i: list_of_stuff)
                cout << i << endl;

        for (auto n: {0,1,2,3,4,5})
                cout << n << endl;

        // This construct:
        for (auto i: stuff)

        // If stuff is an array, it becomes:
        for (i = stuff; i < stuff + sizeof(stuff) / sizeof(stuff[0]); ++i)

        // If stuff has .begin() and .end() methods it becomes:
        for (i = stuff.begin(); i != stuff.end(); ++i)

        // Otherwise it becomes:
        for (i = begin(stuff); i != end(stuff); ++i)

        // And you can define begin() and end() functions for any type you
        // want, at any time

See this and this for details.

Lambda functions and expressions

Lambdas! Closures!

Something like this:

// JavaScript
var add = function(a, b) { return a + b; }
# Python
add = lambda a, b: a + b

Becomes this:

auto add = [](int a, int b) { return a + b; }

And something like this:

// JavaScript
var a = 0;
$.each([1, 2, 3, 4], function(idx, el) { a += el });

Becomes this:

unsigned a = 0;
auto elements = { 1, 2, 3, 4 };
std::for_each(elements.begin(), elements.end(), [&a](int el) { a += el; });

See this, this and this.

Tuple types

C++ now has a std::tuple type, that like in Python can be used to implement functions that return multiple values:

        tuple<int, string, vector<string>> parse_stuff()
        {
                return make_tuple(id, name, values);
        }

        string name; vector<string> values;

        // std::ignore can be used to throw away a result
        tie(ignore, name, values) = parse_stuff();

        // std::tie can also be used to do other kind of
        // multi-operations besides assignment:
        return tie(a, b, c) < tie(a1, b1, c1);
        // Is the same as:
        if (a != a1) return a < a1;
        if (b != b1) return b < b1;
        return c < c1;

See here, here and here.

Regular expressions

We now have regular expressions!

        std::regex re(R"((\w+)\s+(\w+))");
        string s("antani blinda");
        smatch res;

        if (regex_match(s, res, re))
            cout << "OK " << res[1] << " -- " << res[2] << endl;

The syntax is ECMAScript by default and can be optionally changed to basic, extended, awk, grep, or egrep.

See here and here.

General-purpose smart pointers

There is std::unique_ptr to code memory ownership explicitly, and std::shared_ptr as a reference counted pointer, and smart pointers can have custom destructors:

    unique_ptr<dirent, std::function<void(void*)>> dirbuf((dirent*)malloc(len), free);

See here and here.

Miscellaneous other cool things

Standard attribute specifiers
string errno_str(int error)
{
    char buf[256];
#if (_POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600) && ! _GNU_SOURCE
    strerror_r(error, buf, 256);
    string res(buf);
#else
    string res(strerror_r(error, buf, 256));
#endif
    return res;
}

[[noreturn]] void throw_libc_error(int error)
{
    throw runtime_error(errno_str(error));
}

See here.

Hash tables

See here and look at the new containers unordered_set, unordered_map, unordered_multiset, and unordered_multimap.

Multithreading

There is a standard threading model, with quite a bit of library support: see here, here, here, and here for atomic data structures.

Variadic templates

Templates can now take variable number of arguments, and that opens possibilities for interesting code generation, like implementing a generic, type-safe printf statement, or something like this:

db.query(R"(
   INSERT INTO table NAMES (id, name, description)
     VALUES (?, ?, ?)
)", 4, "genio", "fantasia, intuizione, decisione, e velocità di esecuzione");

See here and here.

Essential tools

You need at least g++ 4.8 or clang 3.3 to have full C++11 support. They will both be available in jessie, and for wheezy you can use the nightly clang packages repository. I cannot think of a good excuse not to use -Wall on new code.

scan-build from clang is another nice resource for catching even more potential problems at compile time.

valgrind is a great tool for runtime code analysis: valgrind --tool=memcheck (the default) will check your program for wrong memory accesses and memory leaks. valgrind --tool=callgrind will trace function calls for profiling, to be analyzed with kcachegrind. valgrind --tool=helgrind can check multi-threaded programs for suspicious concurrent memory access patterns.

And of course gdb: a nice trick with C++ is to issue break __cxa_throw to get a breakpoint at the point where an exception is being thrown.

Coredump tips: ulimit -c to enable core dumps, triggering a core dump with ^\, opening a core with gdb program core, and more details on man 5 core.

An extra gdb tip, which is not related to C++ but helped me considerably recently, is that it can be attached to running python programs to get a live Python traceback.

Johannes Schauer: simple email setup

30 November, 2014 - 23:39

I was unable to find a good place that describes how to create a simple self-hosted email setup. The most surprising discovery was how much already works after:

apt-get install postfix dovecot-imapd

Right after finishing the installation I was able to receive email (though only into /var/mail in mbox format) and send email (but not from any other host). So while I expected a pretty complex setup, it turned out to boil down to just adjusting some configuration parameters.

Postfix

The two interesting files for configuring postfix are /etc/postfix/main.cf and /etc/postfix/master.cf. A commented version of the former exists in /usr/share/postfix/main.cf.dist. Alternatively, there is the roughly 600,000-word man page postconf(5). The latter file is documented in master(5).

/etc/postfix/main.cf

I changed the following in my main.cf:

@@ -37,3 +37,9 @@
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
+
+home_mailbox = Mail/
+smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination
+smtpd_sasl_type = dovecot
+smtpd_sasl_path = private/auth
+smtp_helo_name = my.reverse.dns.name.com

At this point, also make sure that the parameters smtpd_tls_cert_file and smtpd_tls_key_file point to the right certificate and private key file. So either change these values or replace the content of /etc/ssl/certs/ssl-cert-snakeoil.pem and /etc/ssl/private/ssl-cert-snakeoil.key.

The home_mailbox parameter sets the default path for incoming mail. Since there is no leading slash, this puts mail into $HOME/Mail for each user. The trailing slash is important as it specifies "qmail-style delivery", which means maildir format.

The default of the smtpd_recipient_restrictions parameter is permit_mynetworks reject_unauth_destination, so this just adds the permit_sasl_authenticated option. It is necessary to allow users to send email once they have successfully verified their login through dovecot. The dovecot login verification is activated through the smtpd_sasl_type and smtpd_sasl_path parameters.

I found it necessary to set the smtp_helo_name parameter to the reverse DNS of my server, because many other email servers only accept email from a server with a valid reverse DNS entry. My hosting provider charges USD 7.50 per month to change the default reverse DNS name, so the easy solution is to instead just adjust the name announced in the SMTP HELO.

/etc/postfix/master.cf

The file master.cf is used to enable the submission service. The following diff just removes the comment characters from the appropriate section.

@@ -13,12 +13,12 @@
#smtpd pass - - - - - smtpd
#dnsblog unix - - - - 0 dnsblog
#tlsproxy unix - - - - 0 tlsproxy
-#submission inet n - - - - smtpd
-# -o syslog_name=postfix/submission
-# -o smtpd_tls_security_level=encrypt
-# -o smtpd_sasl_auth_enable=yes
-# -o smtpd_client_restrictions=permit_sasl_authenticated,reject
-# -o milter_macro_daemon_name=ORIGINATING
+submission inet n - - - - smtpd
+ -o syslog_name=postfix/submission
+ -o smtpd_tls_security_level=encrypt
+ -o smtpd_sasl_auth_enable=yes
+ -o smtpd_client_restrictions=permit_sasl_authenticated,reject
+ -o milter_macro_daemon_name=ORIGINATING
#smtps inet n - - - - smtpd
# -o syslog_name=postfix/smtps
# -o smtpd_tls_wrappermode=yes
Dovecot

Since the above configuration changes made postfix store email in a different location and format than the default, dovecot has to be informed about these changes as well. This is done in /etc/dovecot/conf.d/10-mail.conf. The second configuration change, in /etc/dovecot/conf.d/10-master.conf, enables postfix to authenticate users through dovecot. For SSL one should look into /etc/dovecot/conf.d/10-ssl.conf and either adapt the parameters ssl_cert and ssl_key or store the correct certificate and private key in /etc/dovecot/dovecot.pem and /etc/dovecot/private/dovecot.pem, respectively.

The dovecot-core package (which dovecot-imapd depends on) ships tons of documentation. The file /usr/share/doc/dovecot-core/dovecot/documentation.txt.gz gives an overview of what resources are available. The path /usr/share/doc/dovecot-core/dovecot/wiki contains a snapshot of the dovecot wiki at http://wiki2.dovecot.org/. The example configurations seem to be the same files as in /etc/ which are already well commented.

/etc/dovecot/conf.d/10-mail.conf

The following diff changes the default email location from an mbox in /var/mail to a maildir in ~/Mail, as configured for postfix above.

@@ -27,7 +27,7 @@
#
# <doc/wiki/MailLocation.txt>
#
-mail_location = mbox:~/mail:INBOX=/var/mail/%u
+mail_location = maildir:~/Mail

# If you need to set multiple mailbox locations or want to change default
# namespace settings, you can do it by defining namespace sections.
/etc/dovecot/conf.d/10-master.conf

And this enables the authentication socket for postfix:

@@ -93,9 +93,11 @@
}

# Postfix smtp-auth
- #unix_listener /var/spool/postfix/private/auth {
- # mode = 0666
- #}
+ unix_listener /var/spool/postfix/private/auth {
+ mode = 0660
+ user = postfix
+ group = postfix
+ }

# Auth process is run as this user.
#user = $default_internal_user
Aliases

Now email will automatically be put into the ~/Mail directory of the receiver. So a user has to be created for whom one wants to receive mail...

$ adduser josch

...and any aliases for it to be configured in /etc/aliases.

@@ -1,2 +1,4 @@
-# See man 5 aliases for format
-postmaster: root
+root: josch
+postmaster: josch
+hostmaster: josch
+webmaster: josch

After editing /etc/aliases, the command

$ newaliases

has to be run. More can be read in the aliases(5) man page.

Finishing up

Everything is done, and now postfix and dovecot have to be informed about the changes. There are many ways to do that: either restart the services, reboot, or just run:

$ postfix reload
$ doveadm reload

Have fun!

Hideki Yamane: Kadokawa Course Internet vol.2 "Open Source - composes Internet, evolution of software"

30 November, 2014 - 12:33
Now it's out: you can get Kadokawa Course Internet vol.2 "Open Source - composes Internet, evolution of software" (角川インターネット講座 (2) ネットを支えるオープンソース ソフトウェアの進化) at a bookstore.

Yukihiro "Matz" Matsumoto, whom you probably know as the author of the programming language Ruby, is the supervisor for this book, but more people were involved in it. What I wrote is a basic overview of Open Source Software licensing (32 pages, in Japanese), because I've learned a few things about software licensing through my activity in Debian (thanks! :-)

It was a bit tough because the deadline was really tight, so I had to write it even during DebConf14 in Portland (this is the reason why I didn't go to the pub to drink beer on the last night ;)
Hope you enjoy it (Wasshoi!!).

Laura Arjona: Debian meeting in Madrid, GPG keysigning

30 November, 2014 - 05:47

Yesterday some Debian people in Madrid met to have a drink together and do some GPG keysigning. I have received 2 signatures already, #oleole! :) Some minutes ago, I signed all the keys and sent the corresponding emails. Since I have to dedicate some minutes to verifying fingerprints and emails, I dedicated some time to remembering each person too: their face, the topics we talked about, etc. (we were 8 people!).
It's been nice to meet Debian people in person in Madrid (it was the first time I attended). I like speaking Spanish in a Debian context (one different from the translators mailing list); it's kind of fun, relaxing. It has also been an opportunity to meet new people who can be very different from me, and so open my mind to new thoughts.

I hope we meet more often so we strengthen not only the web of trust but also the face-to-face social network :)


Filed under: Uncategorized Tagged: Communities, Debian, encryption, English, gpg, social networks

Dirk Eddelbuettel: RcppArmadillo 0.4.550.1.0

29 November, 2014 - 18:46

A week ago, Conrad provided another minor release 4.550.0 of Armadillo which has since received one minor correction in 4.550.1.0. As before, I had created a GitHub-only pre-release of his pre-release which was tested against the almost one hundred CRAN dependents of our RcppArmadillo package. This passed fine as usual, and results are as always in the rcpp-logs repository.

Processing and acceptance at CRAN took a little longer, as around the same time a fresh failure in our unit tests had become apparent on an as-yet-unannounced new architecture (!!) also tested at CRAN. R-devel has since gotten a new capabilities() test for long double, and we now only run this test (for our rmultinom()) if that test asserts that the given R build has the capability. Phew. So with all that, the new version is now on CRAN; Windows binaries have been built and I also uploaded new Debian binaries.

Changes are summarized below; our end also includes added support for conversion of Field types, thanks to a short pull request by Romain.

Changes in RcppArmadillo version 0.4.550.1.0 (2014-11-26)
  • Upgraded to Armadillo release Version 4.550.1 ("Singapore Sling Deluxe")

    • added matrix exponential function: expmat()

    • faster .log_p() and .avg_log_p() functions in the gmm_diag class when compiling with OpenMP enabled

    • faster handling of in-place addition/subtraction of expressions with an outer product

    • applied correction to gmm_diag relative to the 4.550 release

  • The Armadillo Field type is now converted in as<> conversions

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Holger Levsen: 20141129-uppps

29 November, 2014 - 17:26
uppps

It seems I had set up auto-deployment and forgotten about it. Yay, automatisms.

Dirk Eddelbuettel: CRAN Task Views for Finance and HPC now (also) on GitHub

29 November, 2014 - 09:14

The CRAN Task View system is a fine project which Achim Zeileis initiated almost a decade ago. It is described in a short R Journal article in Volume 5, Number 1. I have been editor / maintainer of the Finance task view essentially since the very beginning of these CRAN Task Views, and added the High-Performance Computing one in the fall of 2008. Many, many people have helped by sending suggestions or even patches; email continues to be the main venue for the changes.

The maintainers of the Web Technologies task view were, at least as far as I know, the first to make the jump to maintaining the task view on GitHub. Karthik and I briefly talked about this when he was in town a few weeks ago for our joint Software Carpentry workshop at Northwestern.

So the topic had been on my mind, but it was only today that I realized that the near-limitless amount of awesome that is pandoc can probably help with maintenance. The task view code by Achim neatly converts the very regular, very XML, very boring original format into somewhat-CRAN-website-specific html. Pandoc, being as versatile as it is, can then make (GitHub-flavoured) markdown out of this, and with a minimal amount of sed magic, we get what we need.

And hence we now have these two new repos:

Contributions are now most welcome by pull request. You can run the included converter script; it differs between the two repos only by one constant for the task view / file name. As an illustration, the one for Finance is below.

#!/usr/bin/r
## if you do not have /usr/bin/r from littler, just use Rscript

ctv <- "Finance"

ctvfile  <- paste0(ctv, ".ctv")
htmlfile <- paste0(ctv, ".html")
mdfile   <- "README.md"

## load packages
suppressMessages(library(XML))          # called by ctv
suppressMessages(library(ctv))

r <- getOption("repos")                 # set CRAN mirror
r["CRAN"] <- "http://cran.rstudio.com"
options(repos=r)

check_ctv_packages(ctvfile)             # run the check

## create html file from ctv file
ctv2html(read.ctv(ctvfile), htmlfile)

### these look atrocious, but are pretty straightforward. read them one by one
###  - start from the htmlfile
cmd <- paste0("cat ", htmlfile,
###  - in lines of the form  ^<a href="Word">Word.html</a>
###  - capture the 'Word' and insert it into a larger URL containing an absolute reference to task view 'Word'
  " | sed -e 's|^<a href=\"\\([a-zA-Z]*\\)\\.html|<a href=\"http://cran.rstudio.com/web/views/\\1.html\"|' | ",
###  - call pandoc, specifying html as input and github-flavoured markdown as output
              "pandoc -s -r html -w markdown_github | ",
###  - deal with the header by removing extra ||, replacing |** with ** and **| with **:              
              "sed -e's/||//g' -e's/|\\*\\*/\\*\\*/g' -e's/\\*\\*|/\\*\\* /g' -e's/|$/  /g' ",
###  - make the implicit URL to packages explicit
              "-e's|../packages/|http://cran.rstudio.com/web/packages/|g' ",
###  - write out mdfile
              "> ", mdfile)

system(cmd)                             # run the conversion

unlink(htmlfile)                        # remove temporary html file

cat("Done.\n")

I am quite pleased with this setup, so a quick thanks towards the maintainers of the Web Technologies task view; of course to Achim for creating CRAN Task Views in the first place, and maintaining them all those years; as always to John MacFarlane for the magic that is pandoc; and last but not least to anybody who has contributed to the CRAN Task Views.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Holger Levsen: 20141129-debconf4

29 November, 2014 - 06:41
#debconf4 debconf4 was the first irc channel I joined and I have been there ever since. Last week I left, not sure if I want to find the way back.

Richard Hartmann: Release Critical Bug report for Week 48

29 November, 2014 - 03:16

Holy bug-count-drop, Batman!

Some bugs which need loving:

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1143 (Including 186 bugs affecting key packages)
    • Affecting Jessie: 274 (key packages: 131) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 189 (key packages: 99) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 46 bugs are tagged 'patch'. (key packages: 27) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 11 bugs are marked as done, but still affect unstable. (key packages: 5) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 132 bugs are neither tagged patch, nor marked done. (key packages: 67) Help make a first step towards resolution!
      • Affecting Jessie only: 85 (key packages: 32) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 59 bugs are in packages that are unblocked by the release team. (key packages: 18)
        • 26 bugs are in packages that are not unblocked. (key packages: 14)

How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy          Jessie
43    284 (213+71)   468 (332+136)   319 (240+79)
44    261 (201+60)   408 (265+143)   274 (224+50)
45    261 (205+56)   425 (291+134)   295 (229+66)
46    271 (200+71)   401 (258+143)   427 (313+114)
47    283 (209+74)   366 (221+145)   342 (260+82)
48    256 (177+79)   378 (230+148)   274 (189+85)
49    256 (180+76)   360 (216+155)
50    204 (148+56)   339 (195+144)
51    178 (124+54)   323 (190+133)
52    115 (78+37)    289 (190+99)
1     93 (60+33)     287 (171+116)
2     82 (46+36)     271 (162+109)
3     25 (15+10)     249 (165+84)
4     14 (8+6)       244 (176+68)
5     2 (0+2)        224 (132+92)
6     release!       212 (129+83)
7     release+1      194 (128+66)
8     release+2      206 (144+62)
9     release+3      174 (105+69)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

Daniel Pocock: XCP / XenServer and Debian Jessie

28 November, 2014 - 21:18

In 2013, Debian wheezy was released with a number of great virtualization options, including the Xen Cloud Platform (XCP / Xen-API) toolstack packaged by Thomas Goirand to run in a native Debian host environment.

Unfortunately, XCP is not available as a host (dom0) solution for the upcoming Debian 8 (jessie) release. However, it is possible to continue running a Debian wheezy system as the dom0 host and run virtualized (domU) jessie systems inside it. It may also be possible to use the packages from wheezy on a jessie system, but I haven't looked into that myself so far.

Newer kernel boot failures in Xen

After successfully upgrading a VM (domU in Xen terminology) from wheezy to jessie, I tried to reboot the VM and found that it wouldn't start. People have reported similar problems booting newer versions of Ubuntu and Fedora in XCP and XenServer environments. PyGrub displayed an error on the dom0 console:

# xe vm-start name-label=server05
Error code: Using  to parse /grub/grub.cfg
Error parameters: Traceback (most recent call last):,
   File "/usr/lib/xcp/lib/pygrub.xcp", line 853, in ,
     raise RuntimeError, "Unable to find partition containing kernel"

There is a quick and easy workaround. Hard-code the kernel and initrd filenames into config values that will be used to boot. A more thorough solution will probably involve using a newer version of PyGrub in wheezy.

If the /boot tree is a separate filesystem inside the VM, use commands like the following (substitute the correct UUID for the VM and the exact names/versions of the vmlinuz and initrd.img files):

xe vm-param-set uuid=da654fd0-74db-11e4-82f8-0800200c9a66 \
   PV-bootloader-args="--kernel=/vmlinuz-3.16-3-amd64
   --ramdisk=/initrd.img-3.16-3-amd64"

xe vm-param-set uuid=da654fd0-74db-11e4-82f8-0800200c9a66 \
   PV-args="root=/dev/mapper/vg00-root ro quiet"

and if /boot is on the root filesystem of the VM, this will do the trick:

xe vm-param-set uuid=da654fd0-74db-11e4-82f8-0800200c9a66 \
   PV-bootloader-args="--kernel=/boot/vmlinuz-3.16-3-amd64
   --ramdisk=/boot/initrd.img-3.16-3-amd64"

xe vm-param-set uuid=da654fd0-74db-11e4-82f8-0800200c9a66 \
   PV-args="root=/dev/mapper/vg00-root ro quiet"

Future strategy

Once a comprehensive XCP solution appears in Debian again, hopefully it will be possible to migrate running VMs into the new platform without any downtime and retire the wheezy dom0.

Other upgrade/migration options exist and the choice will depend on various factors, such as whether or not you have built your own tools around the XCP API and whether you use a solution like OpenStack that depends on it. Debian's pkg-xen-devel mailing list may be a good place to discuss these options further.

Ian Wienand: rstdiary

28 November, 2014 - 09:15

I find it very useful to spend 5 minutes a day to keep a small log of what was worked on, major bugs or reviews and a general small status report. It makes rolling up into a bigger status report easier when required, or handy as a reference before you go into meetings etc.

I was happily using an etherpad page until I couldn't save any more revisions and the page got too long and started giving javascript timeouts. For a replacement I wanted a single file as input with no boilerplate to aid in back-referencing and adding entries quickly. It should be formatted to be future-proof, as well as being emacs, makefile and git friendly. Output should be web-based so I can refer to it easily and point people at it when required, but it just has to be rsynced to public_html with zero setup.

rstdiary will take a flat RST-based input file and chunk it into some reasonable-looking static HTML that looks something like this. It's split by month with some minimal navigation. Copy the output directory somewhere and it is done.

It might also serve as a small example of parsing and converting RST nodes where it does the chunking; unfortunately the official documentation on that is "to be completed" and I couldn't find anything like a canonical example, so I gathered what I could from looking at the source of the transformation stuff. As the license says, the software is provided "as is" without warranty!

So if you've been thinking "I should keep a short daily journal in a flat-file and publish it to a web-server but I can't find any software to do just that" you now have one less excuse.

Niels Thykier: Volume of debian-release mailing list

28 November, 2014 - 04:38

Page 1 of 5

To be honest, I do not know how many mails it shows “per page” (though I assume it is a fixed number). So for comparison, I found the month on debian-devel@l.d.o with the highest volume in the past two years: May 2013, with “Page 1 of 4”.

I hope you will forgive us if we are a bit terse in our replies or slow to reply. We simply have a lot to deal with. :)

 


Stefano Zacchiroli: CTTE nomination

28 November, 2014 - 03:43

Apparently, enough fellow developers have been foolish enough to nominate me as a prospective member of the Debian Technical Committee (CTTE) that I've been approached to formally accept/decline the nomination. (Accepted nominees would then go through a selection process and possibly be proposed to the DPL for appointment.)

I'm honored by the nominations and I thank the fellow developers that have thrown my name in the hat. But I've respectfully declined the nomination. Given my current involvement in an ongoing attempt to introduce a maximum term limit for CTTE membership, it would have been highly inappropriate for me to accept the nomination at this time.

I have no doubt that the current CTTE and the DPL will fill the empty seats with worthy project members.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.