Planet Debian

Planet Debian - http://planet.debian.org/

Mario Lang: Upgrading GlusterFS from Wheezy to Stretch

20 December, 2016 - 17:50

We are about to upgrade one of our GlusterFS-based storage systems at work. Fortunately, I had been worried that the upgrade procedure for the Debian packages had not been tested by the maintainers, and it turns out I was right: simply upgrading the packages without manual intervention will apparently render your GlusterFS server unusable.

Basic setup

I have only tested the most basic distributed GlusterFS setup. No replication whatsoever. We have two GlusterFS servers, storage1 and storage2. A peering between both has been established, and a very basic volume has been configured:

storage1:~# gluster
gluster> peer status
Number of Peers: 1

Hostname: storage2
Uuid: 2d22cc13-2252-4cf1-bfe9-3d27fa2fbc29
State: Peer in Cluster (Connected)
gluster> volume create data storage1:/srv/data storage2:/srv/data
...
gluster> volume start data
...
gluster> volume info

Volume Name: data
Type: Distribute
Volume ID: e2bd5767-4b33-4e57-9320-91ca76f52d56
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: storage1:/srv/data
Brick2: storage2:/srv/data

For the test setup, I populated the volume with a number of files.

Upgrading from Wheezy to Jessie

To be safe, stop the volume before you begin with the package upgrade:

gluster> volume stop data

And now perform your dist-upgrade.

After the upgrade, you will have to perform two manual clean ups. Both actions have to be performed on all storage servers.

/etc/glusterd is now /var/lib/glusterd

The package maintainers have apparently neglected to take care of this one. You need to manually copy the old configuration files over.

storage1:~# cd /var/lib/glusterd && cp -r /etc/glusterd/* .

Put volume-id in extended attribute

GlusterFS 3.5 requires the volume-id in an extended directory attribute. This is also not automatically handled during package upgrade.

storage1:~# vol=data
storage1:~# volid=$(grep volume-id /var/lib/glusterd/vols/$vol/info | cut -d= -f2 | sed 's/-//g')
storage1:~# setfattr -n trusted.glusterfs.volume-id -v 0x$volid /srv/data
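
To double-check that the attribute was stored correctly, you can read it back (assuming the attr package, which provides getfattr, is installed); the value should match the volume-id from the info file:

storage1:~# getfattr -n trusted.glusterfs.volume-id -e hex /srv/data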

With these two steps performed on all GlusterFS servers, you should now be able to start and mount your volume again in Debian Jessie.

Do not forget to explicitly stop the volume again before continuing with the next upgrade step.

Upgrading from Jessie to Stretch

After you have dist-upgraded to Stretch, there is yet another manual step you have to take to convert the volume metadata to the new layout in GlusterFS 3.8. Make sure you have stopped your volumes and the GlusterFS server.

storage1:~# service glusterfs-server stop

Now run the following command:

storage1:~# glusterd --xlator-option *.upgrade=on -N

Now you should be ready to start your volume again:

storage1:~# service glusterfs-server start
storage1:~# gluster
gluster> volume start data

And mount it:

client:~# mount -t glusterfs storage1:/data /mnt

You should now be running GlusterFS 3.8 and your files should still all be there.

Thorsten Glaser: How to use the subtree git merge strategy

20 December, 2016 - 17:36

This article might be perceived as a blatant ripoff of this Linux kernel document, but, on the contrary, it’s intended as an add-on, showing how to do a subtree merge (the multi-project merge strategy that’s actually doable in a heterogeneous group of developers, as opposed to subprojects, which many just can’t wrap their heads around) with contemporary git (“stupid content tracker”). Furthermore, the commands are reformatted to be easier to copy/paste.

To summarise: you’re on the top level of a checkout of the project into which the “other” project (Bproject) is to be merged. We wish to merge the top level of Bproject’s “master” branch as (newly created) subdirectory “dir-B” under the current project’s top level.

	$ git remote add --no-tags -f Bproject /path/to/B/.git
	$ git merge -s ours --allow-unrelated-histories --no-commit Bproject/master
	$ git read-tree -u --prefix=dir-B/ Bproject/master
	$ git commit -m 'Merge B project as our subdirectory dir-B'
 

(mind the trailing slash after dir-B/ on the read-tree command!)

Besides reformatting, the use of --allow-unrelated-histories recently became necessary.
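
Should Bproject keep evolving, later changes can be pulled into the subdirectory using the subtree merge strategy (this follows the kernel document mentioned above; the remote name is the one added earlier):

	$ git pull -s subtree Bproject master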

Mike Hommey: Announcing git-cinnabar 0.4.0 release candidate 2

20 December, 2016 - 16:06

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to remote Mercurial repositories using git.
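
In practice (a minimal sketch, with a placeholder repository URL), a Mercurial repository is cloned by prefixing its URL with hg:: so that the remote helper kicks in:

$ git clone hg::https://hg.example.org/some-repo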

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0rc?
  • /!\ Warning /!\ If you have been using a version of the release branch between 0.4.0rc and 0.4.0rc2 (more precisely, in the range 0335aa1432bdb0a8b5bdbefa98f7c2cd95fc72d2^..0.4.0rc2^), and used git cinnabar download and run on Mac or Windows, please run git cinnabar download again with this version and then ensure your mercurial clones have not been corrupted by case-sensitivity issues by running git cinnabar fsck --manifests. If they contain sha1 mismatches, please reclone.
  • Updated git to 2.11.0 for cinnabar-helper
  • Improvements to the git cinnabar download command
  • Various small code cleanups
  • Improvement to the experimental support for pushing merges

Elizabeth Ferdman: 2 Week Progress Update for PGP Clean Room

20 December, 2016 - 07:00

I’m interning for the PGP Clean Room Project, which aims to code best practices for using GnuPG into the workflow in order to make it easier for users to create and manage their keys. The live disc already contains code for partitioning and mounting the sd cards, and setting up a sample gpg.conf. For my first task, I decided to start building a basic TUI to gather the user’s info and preferences for the primary and secondary encryption keys, as well as for additional subkeys and uids, and then create the keys based on the user’s input. See the code on github.

For the TUI, I decided on whiptail, a command line program that creates user-friendly dialogs. You can see what the whiptail script looks like so far here and run it to see what the TUI looks like.
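
As a rough illustration of the kind of dialog whiptail provides (a minimal sketch, not taken from the actual script), an input box returns its value on stderr, so a common idiom is to swap the file descriptors to capture it in a variable:

NAME=$(whiptail --title "PGP Clean Room" --inputbox "Enter the name for your key:" 8 60 3>&1 1>&2 2>&3)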

I also explored different ways to run the gpg2 commands non-interactively. One option is Unattended Key Generation using the --batch option; however, that doesn’t allow for multiple subkeys or for setting a different expiration date for the subkey than for the primary key.

gpg2 --gen-key --batch gen-key-script

or

gpg2 --expert --full-gen-key --batch gen-key-script
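
For reference, the gen-key-script referred to above is just a plain-text parameter file for GnuPG’s unattended key generation; a minimal sketch (all names, sizes and dates are placeholders, and %no-protection simply skips the passphrase prompt) could look like this:

%echo Generating an example key
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 2048
Name-Real: Joe Tester
Name-Email: joe@example.org
Expire-Date: 3y
%no-protection
%commit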

The --command-fd and --command-file options also allow you to bypass the interactive prompt:

echo "uid 1\nprimary\nsave\n" | gpg2 --command-fd 0 --status-fd 2 --edit-key 'Joe Tester'

Newer versions of gpg2 simplify the creation of primary keys and subkeys, as well as the editing of keys, with one-liners such as:

gpg2 --quick-gen-key 'User Name <user@example.com>' rsa4096 sign 3y

gpg2 --quick-addkey <fingerprint> rsa2048 encrypt 1y

gpg2 --quick-adduid <fingerprint> <primary-uid> <new-uid>

Aside from making life easier for command line users, gpg 2.1.x also supports ECC keys and creates a revocation certificate by default.

Mike Gabriel: [Arctica Project] Release of nx-libs (version 3.5.99.3)

19 December, 2016 - 21:39
Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) has been originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Monday, Dec 19th, version 3.5.99.3 of nx-libs has been released [1].

This release brings another major backport of libNX_X11 (to the status of X.org's libX11 1.6.4, i.e. latest HEAD) and also a major backport of the xtrans library (status of latest HEAD at X.org, as well). This big chunk of work has again been performed by Ulrich Sibiller. Thanks for your work on this.

This release is also the first version of nx-libs (v3) that has dropped nxcompext as a shared library. We discovered that shipping nxcompext as a shared library is a big design flaw, as it has to be built against header files private to the Xserver (namely, dix.h). Consequently, code from nxcompext has been moved into the nxagent DDX [2].

Furthermore, we worked again and again on cleaning up the code base. We dropped various files from the Xserver code shipped in nx-libs, and various compiler warnings have been fixed.

In the upstream ChangeLog you will find some more items around code clean-ups and .deb packaging, see the diff [3] on the ChangeLog file for details.

A very special and massive thanks to all major contributors, namely Ulrich Sibiller, Mihai Moldovan and Vadim Troshchinskiy. Well done!!! Also a special thanks to Vadim Troshchinskiy for fixing some regressions in nxcomp introduced by the Unix socket support.

Change Log

A list of recent changes (since 3.5.99.2) can be obtained from here.

Known Issues from 3.5.99.2 (solved in 3.5.99.3)

This version of nx-libs now works fine again with LDFLAGS / CFLAGS having the -pie / -fPIE hardening flags set.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

Ubuntu developers, please note: we have added nightly builds for Ubuntu latest to our build server. This has been Ubuntu 16.10 so far, but we will soon drop 16.10 support in nightly builds and add 17.04 support.

References

Norbert Preining: I. J. Parker – The Dragon Scroll

19 December, 2016 - 18:32

A very enthralling and entertaining crime story set in 11th-century Japan, the starting point of a series of novels around Sugawara Akitada (菅原 顕忠), a fictional official/scholar in the Heian period who solves several difficult cases using his great balance of knowledge and common sense.

Akitada is sent to the far north (nowadays around Chiba) to check what has happened to the last three tax convoys that never appeared in the capital. He pokes around and unravels an involved plot to overthrow law and order. A few love stories, dead ends, and lots of wandering around bring the story to a wild finish.

The first book in the Akitada series reads very smoothly and quickly, never boring. It gives nice views of the society as imagined by the (scholarly) author, and somehow manages to transfer the feeling of living in this area to the reader.

For those with interest in criminal stories and Japan, it is a very recommendable book.

Chris Lamb: 10 years of Debian

19 December, 2016 - 18:27

Today marks the 10-year anniversary of my first contribution to Debian GNU/Linux.

I will not recount the entire history here but my first experience with Debian was a happy accident: I had sent off for a 5-CD set of Red Hat from The Linux Emporium only to discover I lacked the required 12MB of RAM. Annoyed, I reached for a Debian "potato" CD that was included for free due to it being outdated at the time.

Fast-forwarding a few years, whilst my first contribution was trivial, it was Thomas Bushnell's infectious enthusiasm that led me to contribute more, submitting my first ITP, becoming a Google Summer of Code student under Daniel Baumann, and finally becoming an official Debian Developer in September 2008 with Thomas Viehmann as my Application Manager. (Some things may never change: I still struggle with the bug tracker's control@ interface…)

The response I got to my patch always reminds me of the irrational power of providing attribution. I've always liked to tell myself I'm above such vanities, but perhaps the truly mature approach would be to accept that ego is part of the human condition and—as a community—take steps to avoid handicapping ourselves by underestimating the value of "trivialities" such as having one's name listed.

I've since been fascinated by the number of maintainers who do not attribute patches in changelogs, especially from newcomers or when the changes are non-trivial — a handful in particular have stung me quite deeply.

I would certainly concede that it adds nothing technical and can even be distracting, but it seems a reasonable concession that dramatically increases the chance of future efforts or, frankly, is simply a gesture of thanks and good will. Given our level of technical expertise, I fear we regrettably and regularly suffer from not having sufficient empathy for newcomers or first-time users who lack the context or orientation that we have.

Anyway, here's to another ten…

Hideki Yamane: considering package delta

19 December, 2016 - 12:06

From Android Developers Blog: Saving Data: Reducing the size of App Updates by 65%

We should consider providing delta packages, especially for update packages from security.debian.org, IMO.

Jonathan McDowell: Timezones + static blog generation

19 December, 2016 - 06:28

So, it turns out when you move to static blog generation and do the generation on your laptop, which is usually in the timezone you’re currently physically located, it can cause URLs to change. Especially if you’re prone to blogging late at night, which can result in even just a shift to DST changing things. I’ve forced jekyll to UTC by adding timezone: 'UTC' to the config, and ensuring all the posts now have timezones for when they were written (a lot of the imported ones didn’t), so hopefully things should be stable from here on.
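
For reference, a minimal sketch of the two pieces involved (the dates shown are just examples): the global setting lives in _config.yml, while each post's front matter carries an explicit offset:

# _config.yml
timezone: 'UTC'

# front matter of an individual post
---
title: "Timezones + static blog generation"
date: 2016-12-19 06:28:00 +0000
---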

Iustin Pop: Printer fun

19 December, 2016 - 05:58

Had some printer fun this week. It was fun in the sense that failure modes are interesting, not that there was much joy in the process.

My current document printer is an HP that I bought back in early 2008; soon 9 years old, that is. When I got the printer I was quite happy: it supports Postscript, it supports memory extension (which allowed me to go from the built-in 64MB to a whopping 320MB), it is networked and has automatic duplex. Not good for much more than document printing, but that it did well. I didn't print a lot on it (averaged it was well below the recommended monthly limit), which might explain the total trouble-free operation, but I did change the toner cartridges a couple of times.

The current cartridges were running low for a while, but I didn't need to change them yet. As I printed a user manual at the beginning of the week (~300+ pages in total), I ran out of the black half-way through. Bought a new cartridge, installed it, and the first strange thing was that it still showed “Black empty - please replace”.

I powered the printer off and turned it on again (the miracle cure for all IT-related things), and things seemed OK, so I restarted printing. However, this time, the printer was going through 20-30 pages, and then was getting stuck in "Printing document" with green led blinking. Waited for 20 minutes, nothing. So cancel the job (from the printer), restart printing, all fine.

The next day I wanted to print a single page, and didn't manage to. Checked that the PDF is normal, checked an older PDF which I printed successfully before, nothing worked. Changed drivers, unseated & re-seated the extra memory, changed operating systems, nothing. Not even the built-in printer diagnostic pages were printing.

The internet was full of reports about "HP formatter issues"; apparently some HP printers had "green" (i.e. low-quality) soldering, and were failing after a while. But people were complaining about 1, 2 or 4 years, not the 9 that my printer had worked, and it was very suspicious that all the trouble started right after my cartridge replacement. Or, more likely, after the recent sudden increase in printing.

Given that formatter board fixes (bake in the oven for N minutes at a specific temperature to reflow the soldering) are temporary and that you can't find replacement parts for this printer, I started looking for a new printer. To my surprise (and dismay at the waste that capitalism produces), a new printer from a higher class was cheaper than replacing all 4 cartridges in my printer. So I had a 90% full black cartridge that I couldn't reuse, but I'd get a new printer for not much more.

Interestingly, in 9 years, the development was:

  • In the series of printers that I had (home office use), one can't get an Ethernet-only networked duplex printer; the M252 series has only an 'n' variant (Ethernet only, no duplex), or a 'dw' variant (Ethernet, WiFi, duplex); if one wants duplex but no WiFi, it's available only in the next series, the M452.
  • The CPU speeds increased 2-3× and memory capacity by 2-4×; however, memory or font expansion is no longer possible.
  • The M252 series still uses Fast Ethernet (which is enough and consumes less power), whereas the M452 series has Gigabit.
  • It seems the cartridges come in two different capacities, but basically colour laser printers still employ the same 4-colour cartridge set (compare to photo printers at 9+).
  • I did just a brief examination of the market, but for home use, it seems the recommendation is still HP for no-troubles use or other brands for cheaper costs. Of course it varies a lot in reviews, but this is what I understood from forums; maybe I'm biased.
  • There was no increase in real resolution; the native grid is still 600dpi (photo inkjet printers are also stuck at 360/720 native for a while), but the ImageRet software processing seems to have advanced (from what the white-papers say).
  • Print speed however has visibly increased; still the same 2-3× increase, but this is wall-clock speed increase, whereas the CPU/memory is less relevant.

I was however happy that one can still get OS-independent (Postscript), networked printers that are small enough for home use and don't (necessarily) come with WiFi.

However, one thing still bothered me: did I have such problems because the printer died of overwork at old age, or was it related to the cartridge change? So I start searching again, and I find a post on a forum (oh Google, why did you remove "forum search" and replaced it with "language level"?) that details a hidden procedure to format the internal storage of the printer, exactly for my printer model, exactly for my symptoms. Huh, I will lose page count, but this is worth a try…

So I do press the required keys, I see the printer booting and saying "erasing…", then asking for language, which makes me happy because it seems the forum post was correct in one regard. I confirm English, the printer reboots once more, and then when it comes up it warns me: "Yellow cartridge is a non-HP original, please confirm". I get confused, and re-seat all cartridges, to no avail. Yellow is non-HP. Sigh, maybe that cartridge had something that confused the printer? When I visit its web page however, all cartridges except the newly installed black one are marked as Non-HP; this only means that I can't see their remaining toner level, but otherwise—the printer is restored back to life. I take the opportunity to also perform a firmware upgrade (only five years newer firmware, but still quite old), but this doesn't solve the Non-HP message.

The printer works now, and I'm left wondering: was this all a DRM-related failure, something like new cartridge chip which had some new code that confused the printer so bad it needed reformatting, at which point the old cartridge code is no longer supported (for whatever reason)? Was it just a fluke, unrelated to DRM? Was the problem that I powered off the printer soon after replacing the cartridge, while it was still doing “something” (e.g. preparing to do a calibration after the change)?

And another, more practical question: I have 3 cartridges to replace still; they were at 10% before this entire saga, and I'm not able to see their level anymore, but they'll get down to empty soon. The black cartridge in the printer is already at 77%, which is surprising as I didn't print that much. So should I replace the cartridges on what is a possibly fully functional, but also possibly a dying printer? Or buy a new one for slightly more, throwing out possibly good hardware?

Even though I understand the business reason behind it, I hate the whole concept of "the printer is free, you pay for the ink". Though in my case "free" didn't mean bad, as a lifetime of 9 years is good enough for a printer.

Daniel Stender: How to cheat setuptools-scm (Debian diary)

19 December, 2016 - 04:56

This is another little issue from Python packaging for Debian which I came across lately while packaging the compressed NumPy-based data container Bcolz. Upstream uses setuptools-scm to determine the software’s version at build time from the source code management environment the code is in. This method is convenient for upstream development because the version number doesn’t need to be hard-coded, and people often just forget to update that (and other version-carrying files like doc/conf.py) when a new version of a project is released.

The setuptools extension just needs to be hooked into setup.py to do its job, and in Bcolz the code goes like this:

setup(
    name="bcolz",
    use_scm_version={
        'version_scheme': 'guess-next-dev',
        'local_scheme': 'dirty-tag',
    'write_to': 'bcolz/version.py'
    },
    # further setup() arguments omitted
)

The file the version number is written to is bcolz/version.py. This file is neither in the upstream code revision nor in the tarball which was released by the upstream developers; it is always generated at build time. In Debian you get an error if you try to build a package from a source tree which contains files that aren’t found in the corresponding tarball (like cruft from a previous build), or if any files have changed (therefore every new package should also be test-built twice in a row in a non-chroot environment). Generally there are two ways to solve this: either you add the cruft to debian/clean, or you add the file resp. a matching file pattern to extend-diff-ignore in debian/source/options. Which method is the better one could be discussed; I generally use the clean option if something isn’t in the upstream tarball, and the source/options solution if something is already in the upstream tarball but gets changed during a build. This is related to your preferred Git procedures: if you remove a file which is in the upstream tarball, these removals have to be checked in separately, and that means every time a new upstream tarball is released, which is not very convenient. Another available option is to strip certain files from the upstream tarball by putting them into Files-Excluded in debian/copyright. By the way, the same complex of problems applies to egg-info/: that folder either is or isn’t shipped in the upstream tarball, and files in that folder get changed during the build.
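
For the concrete bcolz/version.py case, the two variants would look roughly like this (a minimal sketch; the path and pattern are specific to this package):

# debian/clean
bcolz/version.py

# debian/source/options (alternative)
extend-diff-ignore = "^bcolz/version\.py$"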

When the source code is put into a Git environment for Debian packaging, sometimes there are problems with the version number setuptools-scm comes up with. This setuptools extension gets the recent version from the latest Git tag when there is a version number to be found, and that’s all right. In Git environments for Debian packaging (like e.g. those of the Debian Science group, the Python groups and others) such a tag is available, as the commonly used upstream tags carry that [1]. The problem is, sometimes the upstream version which Debian has [2] doesn’t match the original upstream version number which is wanted for version in bcolz/version.py. For example, the suffix +ds is used if the upstream tarball has been stripped of prebuilt files or embedded convenience shipments (as is the case with the Bcolz package, where c-blosc/ has been stripped because that is built for another package), or the suffix “+dfsg” shows that non-DFSG-free software has been removed (which can’t be distributed through the main archive section). Thus, the version string for Bcolz which is found after the build currently (1.1.0+ds1-1) is 1.1.0+ds1:

# coding: utf-8
# file generated by setuptools_scm
# don't change, don't track in version control
version = '1.1.0+ds1'

But that’s not wanted, because this version has never been released, yet it appears everywhere:

$ pip list | grep bcolz
bcolz (1.1.0+ds1)

There are several different ways to fix this. The one “with the crowbar” (as we say in German) is to patch use_scm_version out of setup.py, but if you don’t provide any version in exchange, the version number which Setuptools then uses is 0.0.0. The upstream version could be hard-coded into the patch, but then again the maintainer must not forget to update it manually, which is not very convenient. Plus, setup.py could change and the patch might then need to be unfuzzed, thus more work. Bad.

A patch can be spared by setting and exporting the SETUPTOOLS_SCM_PRETEND_VERSION environment variable for setuptools-scm in debian/rules, which is done here and there, judging by the hits for that string on Debian Code Search. But how do we avoid hard-coding the version number here? The dpkg-dev package (pulled in by build-essential) ships a Makefile snippet /usr/share/dpkg/pkg-info.mk which can be included in debian/rules. It defines several variables which are useful for packaging, e.g. DEB_SOURCE contains the source package name. DEB_VERSION_UPSTREAM, which is also available through it, yields the upstream version without epoch and Debian revision, but it doesn’t get any finer-grained than that out of the box.

For a custom fix, a regular expression which removes the +... extensions (if present) from the bare upstream version string would be s/\+[^+]*//:

$ echo "1.1.0+ds1" | sed -e 's/\+[^+]*//'
1.1.0
$ echo "1.1.0" | sed -e 's/\+[^+]*//'
1.1.0
$ echo "1.1.0+dfsg12" | sed -e 's/\+[^+]*//'
1.1.0

With that, a custom variable VERSION_UPSTREAM can be set on top of DEB_VERSION_UPSTREAM (from pkg-info.mk) in debian/rules:

include /usr/share/dpkg/pkg-info.mk
VERSION_UPSTREAM = $(shell echo '$(DEB_VERSION_UPSTREAM)' | sed -e 's/\+[^+]*//')
export SETUPTOOLS_SCM_PRETEND_VERSION = $(VERSION_UPSTREAM)

Bam, that works (see the commit here):

# coding: utf-8
# file generated by setuptools_scm
# don't change, don't track in version control
version = '1.1.0'

In addition, I’ve seen that dh-python also takes care of SETUPTOOLS_SCM_PRETEND_VERSION since 2.20160609. The environment variable is set by the Debhelper build system if python{3,}-setuptools-scm is among the build-dependencies in debian/control. The Perl code for that is in dh/pybuild.pm.

  1. For Git in Debian packaging, e.g. see the DEP-14 proposal (Recommended layout for Git packaging repositories): http://dep.debian.net/deps/dep14/ [return]
  2. Following the scheme for package versions “[epoch:]upstream_version[-debian-revision]” [return]

Sean Whitton: progpoking

19 December, 2016 - 04:34

Programming by poking: why MIT stopped teaching SICP

Perhaps there is a case for CS programs keeping pace with workplace technological changes (in addition to developments in the academic field of CS), but it seems sad to deprive undergrads of deeper knowledge about language design.

Dirk Eddelbuettel: RcppArmadillo 0.7.600.1.0

18 December, 2016 - 21:44

Earlier this week, Conrad released Armadillo 7.600.1. The corresponding RcppArmadillo release 0.7.600.1.0 is now on CRAN and in Debian. This follows several rounds of testing at our end, with a full reverse-dependency check of a pre-release version followed by another full reverse-dependency check. Which was of course followed by CRAN testing for two more days.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 298 other packages on CRAN -- an increase of 24 just since the last CRAN release of 0.7.500.0.0 in October!
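
As a purely illustrative sketch (not taken from this release) of how the integration is typically used, an Armadillo-based function can be compiled and called straight from R:

library(Rcpp)
cppFunction(depends = "RcppArmadillo", code = '
  arma::vec scaledColSums(const arma::mat& X, double s) {
      // column sums of X, scaled by s, returned as a numeric vector
      return s * arma::sum(X, 0).t();
  }
')
scaledColSums(matrix(1:6, 2, 3), 2.0)   # 6 14 22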

Changes in this release relative to the previous CRAN release 0.7.500.0.0 are as follows:

Changes in RcppArmadillo version 0.7.600.1.0 (2016-12-16)
  • Upgraded to Armadillo release 7.600.1 (Coup d'Etat Deluxe)

    • more accurate eigs_sym() and eigs_gen()

    • expanded floor(), ceil(), round(), trunc(), sign() to handle sparse matrices

    • added arg(), atan2(), hypot()

Changes in RcppArmadillo version 0.7.500.1.0 (2016-11-11)
  • Upgraded to Armadillo release 7.500.1

  • Small improvement to return value treatment

  • The sample.h extension was updated to the newer Armadillo interface. (Closes #111)

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Mike Gabriel: Free Your Phone, Free Yourself, Get Sponsored for your Work

18 December, 2016 - 20:40

TL;DR: This is a call to every FLOSS developer interested in working towards Free-Software-driven mobile phones, esp. targeting the Fairphone 2. If your only show stopper is lack of development hardware or lack of financial support, please read on below.

As I see it, the upcoming Fairphone 2 will be (or already is) the FLOSS community platform on the mobile devices market. I regularly get news about people working on this or that OS port to the FP2 hardware platform. The combination of a hardware-wise sustainably maintained mobile phone platform and a Free (or sort-of-Free) operating system being ported to it makes the Fairphone 2 a really attractive device.

Personally, I run Sailfish OS on my Fairphone 2. Some weeks ago, I was contacted by one of my sponsors, who has got involved in setting up an initiative that works on porting the Ubuntu tablet/phone OS to the FP2. That project is in need of more developers. Possibly, it needs exactly YOU!!!

So, if you are a developer that meets one or more of the below requirements and are interested in working in a highly motivated team, please get in touch with the UT4FP [1] project (skill requirements taken from the UT4FP website):

  • Expert knowledge on Android Build System (AOSP / Cyanogenmod);
  • Experience in porting devices to Android;
  • Knowledge of build-up of systems like Ubuntu Touch, SailfishOS, Firefox OS;
  • Knowledge of or understanding the Ubuntu Touch build system and the available manifests: UBports, but also phablet.ubuntu.com;
  • Experience with Git / repo;
  • C/C++ experience for (potentially) customizing code;
  • Reverse engineering → debugging individual components on the basis of logcat, dmesg, syslog, strace (boot, graphics, camera, GPS, Wifi etc.);
  • Debugging build errors and adjusting (Android) Makefiles;
  • Building a devicetree or migrating an existing devicetree for the purpose of a successful build;
  • Knowing where to find which components. (i.e. GitHub, CAF, Vendortrees (blobs));
  • Knowing how to patch a kernel and how to port AppArmor;
  • You know how to document each step and are willing to make all codes and adjustments available.

My sponsor offers to send out FP2 devices to (seriously) interested developers and if needed, he can also back up developers financially. If you are interested, please get in touch with me (and I'll channel you through...) via IRC (on the OFTC or Freenode network).

light+love Mike (aka sunweaver on IRC)

[1] https://www.ut4fp.org/

Bálint Réczey: Hardening Debian Stretch with PIE is ready but bindnow will be missing

18 December, 2016 - 17:28

Hardening all executables by making them position-independent by default is basically ready, with only a few packages left to fix (bugs). On the other hand, bindnow is not enabled globally (#835146) and it seems it will not be for the next stable release, despite my plan :-(.

If you are a maintainer you can still have your packages hardened in Stretch by enabling bindnow per package before 25 December. It could be a nice present for your users!
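
A minimal sketch of the per-package opt-in, using the standard dpkg-buildflags hardening interface, is a single line in debian/rules:

# debian/rules
export DEB_BUILD_MAINT_OPTIONS = hardening=+bindnow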

Johannes Schauer: Looking for self-hosted filesharing software

18 December, 2016 - 17:18

The owncloud package was removed from Debian unstable and testing. I am thus now looking for an alternative. Unfortunately, finding such a replacement seems to be harder than I initially thought, even though I only use a very small subset of what owncloud provides. What I require is some software which allows me to:

  1. upload a directory of files of any type to my server (no "distributed" filesharing where I have to stay online with my laptop)
  2. share the content of that directory via HTTP (no requirement to install any additional software other than a web browser)
  3. let the share-links be private (no possibility to infer the location of other shares)
  4. allow users to browse that directory (image thumbnails or a photo gallery would be nice)
  5. allow me to allow anonymous users to upload their own content into that directory (also only requiring their web browser)
  6. already in Debian or easy to package and maintain due to low complexity (I don't have enough time to become the next "owncloud maintainer")

I thought this was a pretty simple task to solve but I am unable to find any software that fits above criteria.

The below table shows the result of my research of what's currently available. The columns mark whether the respective software fulfills one of the six criteria from above.

Software              1  2  3  4  5  6
owncloud              ✔  ✔  ✔  ✔  ✔  ✘
sparkleshare          ✔  ✘  ✘  ✘  ✘  ✔
dvcs-autosync         ✔  ✘  ✘  ✘  ✘  ✔
git annex assistant   ✔  ✘  ✘  ✘  ✘  ✔
syncthing             ✔  ✘  ✘  ✘  ✘  ✔
pydio                 ✔  ✔  ✔  ✔  ✔  ✘
seafile               ✔  ✔  ✔  ✔  ✔  ✘
ipfs                  ✘  ✘  ✘  ✘  ✘  ✘
bozon                 ✔  ✔  ✔  ✘  ✘  ✔
droppy                ✔  ✔  ✔  ✘  ✘  ✔

Pydio and seafile look promising, but they seem to be beasts similar in complexity to owncloud, as they bring features like version tracking, office integration, wikis, synchronization across multiple devices or online editing of files, none of which I need.

I would already be very happy if there was a script which would make it easy to create a hard-to-guess symlink to a directory with data tracked by git annex under my www-root and then generate some static HTML to provide a thumbnails view or a photo gallery. Unfortunately, even that solution would not be sufficient as it would still disallow public upload by anybody whom I would give the link to...
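
A minimal sketch of that symlink idea (all paths and URLs are placeholders, and it still leaves the gallery and anonymous-upload requirements unsolved) could be as simple as:

# create a hard-to-guess entry point below the web root
token=$(openssl rand -hex 16)
ln -s /srv/annex/photos "/var/www/html/$token"
echo "private share: https://example.org/$token/"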

If you know some software that meets my criteria or would like to submit corrections to above table, please shoot an email to josch@debian.org. Thanks!

Shirish Agarwal: Demonetisation, Indian state and world

18 December, 2016 - 04:19

Queues get longer, patience runs out- Copyright Indian Express Group.

I don’t know whether people have heard about the demonetisation of the INR 500 and INR 1000 notes which happened in India on 8th November 2016, with new INR 2000 and INR 500 notes designed in India.

What happened was that, from that moment onwards, the paper currency of INR 500 and INR 1000 notes was declared invalid except at a few places (government hospitals, petrol pumps, booking of air and train tickets). The reasons given were:

a. End of corruption – There is/was suspicion that there are people who have loads of unaccounted wealth which they keep in the form of cash in hand.

b. Fighting fake/duplicate currency – There is/was suspicion that quite a bit of the money, especially high-value notes such as INR 500 and INR 1000, is fake/duplicate; so, having made those notes illegal, people had to hand over cash to banks and the fake money would be pushed out of the system.

c. Terror funding – This is related to the above point. There is a popular theory/myth/fact that terrorists use fake money to buy people, arms and ammunition, while further devaluing the INR against the dollar and the basket of other high-value currencies that the Indian currency follows/bases itself on.

Each of these theories/myths/facts has been contested. Every day we are seeing and reading reports of people being caught with new currency in absurd amounts, while the RBI, our central bank and lender of last resort, has had to play multiple roles, such as policing along with the country’s Income Tax Department, as well as pumping new notes of the NEW INR 2000 and INR 500 into ATMs and bank branches around the country.

Now while the above may seem reasonable, there have been multiple factors which have made the whole exercise less effective in its implementation –

a. Banking reach – India does and can boast of somewhat good indicators of banking reach. But –

A quarter of these accounts were opened only in the last couple of years under the ‘Pradhan Mantri Jan Dhan Yojana‘.

There are quite a few limitations to such accounts. It is a good scheme: if you develop a good rapport with a bank and show a good understanding of credit/debit, there is the possibility to move to a normal full-fledged bank account.

Almost all of these accounts had zero-balances till the demonetization move.

Many of these accounts are suspected to have been conduits to convert black money to white as the Govt. had said it will not scrutinize small savings bank accounts.

Also many bank accounts have historically lain dormant over the years. One of my first jobs was as a data entry operator in a bank, and I used to see hundreds of bank accounts lying dormant for years together. This was during bank digitization in the early 90s.

Small savings accounts would not be scrutinized if they bring in up to INR 250000, while Jan Dhan accounts have an upper limit of only INR 50000.

Even then, it has led to a huge surge in balances, specifically in zero-balance accounts.

Which begs the question: if it is their hard-earned money, why hadn’t they deposited it before 8th November 2016?

While I can’t speak for them, I can certainly speak about myself. For a number of years I have hardly kept more than INR 5-10K in the house, for medical emergencies.

Unless you are a businessman who needs cash or you have some function coming up, nobody that I know would keep such amounts at home, simply because of the possibility of theft.

So how did such people who are not able to open a full-fledged saving account get access to such large amounts?

In most public sector banks, to have a full-fledged savings account the only requirements are –

a. Have INR 500 to 1000 as balance at all times.
b. Have permanent identity and residential proof
c. Two photographs
d. 2-3 people who are account holders who can act as guarantor.

Of the above, b. and d. are probably sticking points for most migrants, while d. may be a sticking point for labourers, craftsmen etc., hence the need for that specific scheme.

Which leads to the natural suspicion that they may have been white-washing somebody’s untaxed, unaccounted money, which is being put into the bank and made into legitimate white money.

People do not have to file an Income Tax Return (ITR) unless they earn more than 250,000 in a single financial year.

One good offshoot of the scheme though is the transparency gained about Bank Mitras.

b. The number of banks, the quality of bank services, and the number of people per bank, at least in nationalized banks, leave much to be desired. We can’t even try to compare with other BRIC countries, let alone Germany.

Mobile ATM – Copyright – PTI

Another positive offshoot has been the introduction of mobile ATM vans around the country.

I had experienced such vans in Mumbai for ages, but not anywhere else.

I do hope that both Bank Mitras as well as such mobile ATM vans become more common. There are huge swathes of people who are currently unbanked. Getting them into the banking infrastructure, and getting them to *think* about taking rational financial decisions, i.e. saving and spending, different types of saving etc., should not only make citizens and the banking system more productive and efficient, but hopefully also improve our GDP and make it more resilient to outside financial shocks.

I do have a few queries though. One country which is supposed to be a prominent supporter and user of the ‘cashless’ society is Canada. Could any Canadians (also because DebConf is going to happen in Canada in 2017) share how and whether they have seen the Canadian banking system evolve in their country?

Also, how much of the country’s economy is cashless, i.e. uses electronic money transfer and other non-cash means, and how much is cash, in day-to-day usage and transactions rather than on some website?

Also, what charges/commission, if any, are paid to the bank for paying via card/electronic money transfer? I ask as India has reduced charges overall from 2% to 1% for transactions of up to INR 2000 in a day.

There has also been recent talk of plastic notes instead of paper currency. Plastic notes are supposed to be harder to counterfeit and also last much longer. They will not soil as paper notes do.

How have other countries been looking at plastic currencies? I do suspect there would be issues with destroying plastic money compared to paper currency.

A sort of interesting discussion that I had with Bernelle before venturing into South Africa was about monetary transactions in SA. She replied that the highest-denomination note is 200 ZAR, which is roughly equal to INR 1000 (ZAR 200 x 5). What is/was interesting is that Bernelle told me to be careful and, as far as possible, not to show a 200 ZAR note, whereas in India even the lowest-paid workers I have met have seen and used an INR 1000 note. The context of the discussion was being safe in South Africa and what works when doing transactions with people around.

It would be interesting to know how things work in Canada, for instance.

Also, has Canada or any other country experimented with plastic notes? If yes, how has the experience been?

I would have to say this is in no way a definitive guide to the different impressions and repercussions of the decision and of the way it’s still playing out even now.

Another thing: while researching for this article I came across lots of interesting knowledge, e.g. the Big Mac Index and its limitations, which I didn’t know how to relate to the decision and policy taken. I also came to see that many policy initiatives being taken by the current (NDA) government are similar to initiatives taken elsewhere in the world.

Whether the policy will be fruitful in achieving the desired outcome, or whether it will lead to more chaos and a downturn, we will know only in the next quarter.

It would be nice and interesting if people have observed something similar in their country’s economic policies as well.


Filed under: Miscellenous Tagged: #Bank Mitra, #Bank reach, #blackmoney, #debconf17, #Demonetisation, #fake currencies, #full-fledged savings account, #Jan Dhan scheme, #Moile ATM Van, #Plastic money, #Public Sector Banks (PSB), #Reserve Bank of India, Big Mac Index

Russ Allbery: INN 2.6.1

18 December, 2016 - 03:42

(As usual, Julien finished this release a bit back, and then I got busy with life stuff and haven't gotten the announcement out.)

This is a bug-fix and minor feature release over INN 2.6.0. The biggest change is adding support for the new COMPRESS extension. It also fixes various bugs around state changes when negotiating various compression or integrity layers and fixes some issues with nnrpd's validation of newly-posted messages. (Messages with Received and Posted headers are no longer rejected; messages with all-whitespace headers now are.) This release also supports OpenSSL 1.1.0 and fixes an nntpsend bug under systemd.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

You can get the latest version from the official ISC download page or from my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

Dimitri John Ledkov: Swapfiles by default in Ubuntu

16 December, 2016 - 18:30
[Image: 4MB RAM card]

By default, in Ubuntu, we usually create a swap partition.

Back in the day of 4MB RAM cards this made total sense, as the ratio of RAM to disk space was still very low. Things have changed since. Server, desktop and embedded systems have migrated to newer generations of both RAM and persistent storage. On the high-performance side of things we see machines with faster storage in the form of NVMe and SSD drives. Reserving space for swap on such storage can be seen as expensive and wasteful. This is also true for recent enough laptops and desktops. Mobile phones have substantial amounts of RAM these days, and are at times coupled with eMMC storage - flash storage of lower performance with a limited number of write cycles, which hence should not be overused for volatile swap data. And there are also unicorns in the form of high-performance computing on high-memory (shared-memory) systems with little or no disk space.

Today, carving a partition and reserving twice the RAM size for swap makes little sense. For a common, general, machine most of the time this swap will not be used at all. Or if said swap space is in use but is of inappropriate size, changing it in-place in retrospect is painful.

Starting from 17.04 Zesty Zapus release, instead of creating swap partitions, swapfiles will be used by default for non-lvm based installations.

Secondly, the sizing of swapfiles is very different. It is no more than 5% of free disk space or 2GiB, whichever is lower.
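
For reference, creating such a swapfile by hand on an installed system takes only a few standard commands (a rough sketch run as root, not necessarily what the installer itself does):

fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab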

For preseeding, there are two toggles that control this behavior:
  • d-i partman-swapfile/percentage string 5
  • d-i partman-swapfile/size string 2048
Setting either of those to zero will result in a system without any swap at all. One can also tweak the relative limit in integer percentage points and the absolute limit in integer MiB.
On LVM-based installations, swap logical volumes are used, since unfortunately LVM snapshots do not exclude swapfile changes. However, I would like to move partman-auto to respect the above proposed 5%-or-2GiB limits.

Ps. 4MB RAM card picture is by Bub's (Photo) [GFDL or CC-BY-SA-3.0], via Wikimedia Commons

Dirk Eddelbuettel: nanotime 0.0.1: New package for Nanosecond Resolution Time for R

16 December, 2016 - 18:28

R has excellent tools for dates and times. The Date and POSIXct classes (as well as the 'wide' representation in POSIXlt) are versatile, and a lot of useful tooling has been built around them.

However, POSIXct is implemented as a double with fractional seconds since the epoch. Given the 53 bits of accuracy, that leaves just a bit less than microsecond resolution. Which is good enough for most things.

But more and more performance measurements, latency statistics, ... are now measured more finely, and we need nanosecond resolution. For which commonly an integer64 is used to represent nanoseconds since the epoch.

And while R does not have a native type for this, the bit64 package by Jens Oehlschlägel offers a performant one implemented as a lightweight S3 class. So this package uses this integer64 class, along with two helper functions for parsing and formatting, respectively, at nano-second resolution from the RcppCCTZ package which wraps the CCTZ library from Google. CCTZ is a modern C++11 library extending the (C++11-native) chrono type.

Examples

Simple Parsing and Arithmetic
R> x <- nanotime("1970-01-01T00:00:00.000000001+00:00")
R> print(x)
integer64
[1] 1
R> format(x)
[1] "1970-01-01T00:00:00.000000001+00:00"
R> cat("x+1 is: ")
x+1 is: R> x <- x + 1
R> print(x)
integer64
[1] 2
R> format(x)
[1] "1970-01-01T00:00:00.000000002+00:00"
R>
Vectorised
R> options("width"=60)
R> v <- nanotime(Sys.time()) + 1:5
R> v
integer64
[1] 1481505724483583001 1481505724483583002
[3] 1481505724483583003 1481505724483583004
[5] 1481505724483583005
R> format(v)
[1] "2016-12-12T01:22:04.483583001+00:00"
[2] "2016-12-12T01:22:04.483583002+00:00"
[3] "2016-12-12T01:22:04.483583003+00:00"
[4] "2016-12-12T01:22:04.483583004+00:00"
[5] "2016-12-12T01:22:04.483583005+00:00"
R> 
Use with zoo
R> z <- zoo(cbind(A=1:5, B=5:1), v)
R> options("nanotimeFormat"="%d %b %H:%M:%E*S")  ## override default
R> z
                          A B
12 Dec 01:47:55.812513001 1 5
12 Dec 01:47:55.812513002 2 4
12 Dec 01:47:55.812513003 3 3
12 Dec 01:47:55.812513004 4 2
12 Dec 01:47:55.812513005 5 1
R> 
Technical Details

The bit64 package (by Jens Oehlschlägel) supplies the integer64 type used to store the nanosecond resolution time as (positive or negative) offsets to the epoch of January 1, 1970. The RcppCCTZ package supplies the formatting and parsing routines based on the (modern C++) library CCTZ from Google.

Status

Version 0.0.1 has now been released. It works with some other packages, notably zoo and data.table.

It (at least currently) requires RcppCCTZ to parse and format nanosecond resolution time objects from / to text --- and this package is available on Linux and OS X only due to its use of the system time zoneinfo. The requirement could be relaxed in the future by rewriting the formatting and parsing code. Contributions are welcome.

Installation

The package is not yet on CRAN. Until it gets there, or to install the development versions, it can also be installed via a standard

install.packages("RcppCCTZ")   # need 0.1.0 or later
remotes::install_github("eddelbuettel/nanotime")  

If you prefer install.packages() (as I do), use the version from the ghrr drat:

install.packages("drat")       # easier repo access + creation
drat:::add("ghrr")             # make it known
install.packages("nanotime")   # install it

If and when it gets to CRAN you will be able to do

install.packages("nanotime")
Contact

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.