Planet Debian


Arturo Borrero González: nftables in Debian Stretch

17 October, 2016 - 20:30

The next Debian stable release is codenamed Stretch, which I would expect to be released in less than a year.

The Netfilter Project has been developing nftables for years now, and the framework has matured to the point where it's ready for wide usage and adoption, even in high-demand production environments.

The last released version of nft was 0.6, and the Debian package was updated just a day after Netfilter announced it.

With version 0.6 the framework has reached a good state of maturity, and I encourage users to migrate from iptables to nftables.

In case you don’t know about nftables yet, here is a quick reference:

  • it’s the tool/framework meant to replace iptables (also ip6tables, arptables and ebtables)
  • it integrates advanced structures which allow to arrange your ruleset for optimal performance
  • all the system is more configurable than in iptables
  • the syntax is much better than in iptables
  • several actions in a single rule
  • simplified IPv4/IPv6 dual stack
  • less kernel updates required
  • great support for incremental, dynamic and atomic ruleset updates
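
To illustrate the points about sets, dual stack and several actions in a single rule, here is a small made-up snippet in nft syntax (just an example to give you the flavour, not a recommended ruleset):

table inet filter {
        set blocklist {
                type ipv4_addr
                elements = { 192.0.2.1, 198.51.100.7 }
        }
        chain input {
                type filter hook input priority 0;
                # one rule, several actions: count, log and drop
                ip saddr @blocklist counter log prefix "blocked: " drop
        }
}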

To run nftables in Debian Stretch you need several components:

  1. nft: the command line interface
  2. libnftnl: the nftables-netlink library
  3. Linux kernel: at least 4.7 is recommended

A simple aptitude run will put your system ready to go, out of the box, with nftables:

root@debian:~# aptitude install nftables

Once installed, you can start using the nft command:

root@debian:~# nft list ruleset

A good starting point is to copy a simple workstation firewall configuration:

root@debian:~# cp /usr/share/doc/nftables/examples/syntax/workstation /etc/nftables.conf

And load it:

root@debian:~# nft -f /etc/nftables.conf

Your nftables ruleset is now firewalling your network:

root@debian:~# nft list ruleset
table inet filter {
        chain input {
                type filter hook input priority 0;
                iif lo accept
                ct state established,related accept
                ip6 nexthdr icmpv6 icmpv6 type { nd-neighbor-solicit,  nd-router-advert, nd-neighbor-advert } accept
                counter drop
        }
}

Several examples can be found at /usr/share/doc/nftables/examples/.

A simple systemd service is included to load your ruleset at boot time, which is disabled by default.
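
Enabling it at boot is the usual systemd drill (the unit name below is the one the Debian package ships, as far as I can tell):

root@debian:~# systemctl enable nftables.service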

If you are running Debian Jessie and want to give it a try, you can use nftables from jessie-backports.

If you want to migrate your ruleset from iptables to nftables, good news: there are some tools in place to help with the task of translating from iptables to nftables, but that's a topic for another post :-)

The nano editor includes nft syntax highlighting. What are you waiting for to use nftables?

Thomas Lange: FAI 5.2 is going to the cloud

17 October, 2016 - 18:51

The newest version of FAI, the Fully Automatic Installation tool set, now supports creating disk images for virtual machines or for your cloud environment.

The new command fai-diskimage uses the normal FAI process for building disk images of different formats. An image with a small set of packages can be created in less than 50 seconds, a Debian XFCE desktop in nearly two minutes, and a complete Ubuntu 16.04 desktop image in four minutes.
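
A minimal invocation looks roughly like this (the hostname, size and class list here are only illustrative; see the fai-diskimage manpage for the real options):

# illustrative example only: build a 2 GB raw image using some common FAI classes
fai-diskimage -v -u cloudhost -S 2G -c DEBIAN,AMD64,FAIBASE,GRUB_PC,CLOUD ./cloudhost.raw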

New FAI installation images for CD and USB stick are also available.

FAI cloud

Jaldhar Vyas: Something Else Will Be Posted Soon Also.

17 October, 2016 - 13:07

Yikes, today was Sharad Purnima, which means there are about two weeks to go before Diwali and I haven't written anything here all year.

OK new challenge: write 7 substantive blog posts before Diwali. Can I manage to do it? Let's see...

Russell Coker: Improving Memory

17 October, 2016 - 11:20

I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory, and I think it’s good to teach kids a variety of things while they are young, many of which won’t be needed, as you never know which kids will need which skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”.

Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember).

Remembering addresses (street numbers etc) doesn’t seem very useful in any situation. Remembering the way to get to a place is useful, and it seems to me that the way navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The way that Google Maps tends to give more confusing routes (i.e. routes that vary by the day and routes which take all the shortcuts) works against this.

I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2 digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13 which is a significant disadvantage.

What We Must Memorise

Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.

One interesting corner-case of passwords is ATM PIN numbers. The Wikipedia page about PIN numbers states that 4-12 digits can be used for PINs [2]. The 4 digit PIN was initially chosen because John Adrian Shepherd-Barron (who is credited with inventing the ATM) was convinced by his wife that 6 digits would be too difficult to memorise. The fact that hardly any banks outside Switzerland use more than 4 digits suggests that Mrs Shepherd-Barron had a point. The fact that this was decided in the 60’s proves that it’s not “digital amnesia”.

We also have to memorise how to use various supposedly user-friendly programs. If you observe an iPhone or Mac being used by someone who hasn’t used one before it becomes obvious that they really aren’t so user friendly and users need to memorise many operations. This is not a criticism of Apple, some tasks are inherently complex and require some complexity of the user interface. The limitations of the basic UI facilities become more obvious when there are operations like palm-swiping the screen for a screen-shot and a double-tap plus drag for a 1 finger zoom on Android.

What else do we need to memorise?


Thomas Goirand: Released OpenStack Newton, Moving OpenStack packages to upstream Gerrit CI/CD

17 October, 2016 - 04:28

OpenStack Newton is released, and uploaded to Sid

OpenStack Newton was released on Thursday the 6th of October. I was able to upload nearly all of it before the weekend, though there were still a few hiccups, as I forgot to upload python-fixtures 3.0.0 to unstable and only realized it thanks to some bug reports. As this is a build-time dependency, it didn’t disrupt Sid users too much, but 38 packages wouldn’t build without it. Thanks to Santiago Vila for pointing at the issue here.

As of writing, a lot of the Newton packages haven’t migrated to Testing yet. The migration has been very messy. I’d love to improve this process, but I’m not sure how, short of filing RC bugs against 250 packages (which would be painful to do) so that they migrate at once. Suggestions welcome.

Bye bye Jenkins

For a few years, I was using Jenkins, together with a post-receive hook, to build Debian stable backports of OpenStack packages. Then, nearly a year and a half ago, we started a project to build the packages within the OpenStack infrastructure and to use CI/CD the way OpenStack upstream does. This is now done, and Jenkins is gone as of OpenStack Newton.

Current status

As of August, almost all of the packaging Git repositories were uploaded to OpenStack Gerrit, and the builds now happen in the OpenStack infrastructure. We’ve been able to build all of the OpenStack Newton Debian packages using this system. This non-official jessie backports repository has also been validated using Tempest.

Goodies from Gerrit and upstream CI/CD

It is very nice to have it built this way, as we will be able to maintain a full CI/CD pipeline in the upstream infrastructure using Newton for the life of Stretch, which means we will have the tools to test security patches virtually forever. Another benefit is that now anyone can propose packaging patches, without the need for an Alioth account, by sending a patch for review through Gerrit. It is our hope that this will increase the likelihood of external contributions, for example from 3rd-party plugin vendors (networking driver vendors, for example) or from upstream contributors themselves. They are already used to Gerrit, and they all expected the packaging to work this way. They are all very much welcome.

The upstream infra: nodepool, zuul and friends

The OpenStack infrastructure has already been described on planet.debian.org by Ian Wienand, so I won’t describe it again; he did a better job than I ever would.

How it works

All source packages are stored in Gerrit with a “deb-” prefix. This is in order to avoid conflicts with upstream code, and to easily locate the packaging repositories. For example, you’ll find the Nova packaging under https://git.openstack.org/cgit/openstack/deb-nova. Two Debian repositories are stored in the infrastructure’s AFS (Andrew File System, which means a copy of each repository exists on every cloud where we have compute resources): one for the actual deb-* builds, under “jessie-newton”, and one for the automatic backports, maintained in the deb-auto-backports Gerrit repository.

We’re using a “git tag” based workflow. Every Gerrit repository contains all of the upstream branches, plus a “debian/newton” branch which contains the same content as an upstream tag, plus the debian folder. The orig tarball is generated using “git archive”, then used by sbuild to produce the binaries. To package a new upstream release, one simply needs to “git merge -X theirs FOO” (where FOO is the tag you want to merge), then edit debian/changelog so that the Debian package version matches the tag, then do “git commit -a --amend”, and simply “git review”. At this point, the OpenStack CI will build the package. If it builds correctly, a core reviewer can approve the “merge commit”; the patch is merged, the package is built, and the binary package is published in the OpenStack Debian package repository.
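
Put as a quick shell sketch, using deb-nova as the example repository (the tag here is made up, and the clone URL is derived from the cgit link above):

git clone https://git.openstack.org/openstack/deb-nova
cd deb-nova
git checkout debian/newton
git merge -X theirs 14.0.0     # merge the upstream tag (14.0.0 is a made-up example)
# edit debian/changelog so the Debian package version matches the tag, then:
git commit -a --amend
git review                     # from here, the OpenStack CI builds the package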

Maintaining backports automatically

The automatic backports are maintained through a Gerrit repository called “deb-auto-backports”, containing a “packages-list” file that simply lists the source packages we need to backport. On each new CR (change request) in Gerrit, thanks to some madison-lite and dpkg --compare-versions magic, the packages-list is used to compare what’s in the Debian archive with what we have in the jessie-newton-backports repository. If the version is lower in our repository, or if the package doesn’t exist there, a build is triggered. It is possible to backport from any Debian release (using the -d flag in the “packages-list” file), and we can even use jessie-backports to just rebuild a package. I also had to write a hack to just download from jessie-backports without rebuilding, because rebuilding the webkit2gtk package (needed by sphinx) was taking too many resources (though we’ll try to never use it, and rebuild packages whenever possible).
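
The version comparison itself is just dpkg doing the math; here is a simplified sketch of the decision (with made-up versions and variable names, not the real job):

#!/bin/sh
# simplified illustration of the auto-backports decision
pkg="somepackage"
archive_version="2.0-1"       # what the Debian archive has (made up)
backports_version="1.9-1"     # what our jessie-newton-backports repo has (made up)
if dpkg --compare-versions "$backports_version" lt "$archive_version"; then
    echo "$pkg: $backports_version < $archive_version, triggering a backport build"
fi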

The nice thing with this system is that we don’t need to care much about keeping packages up-to-date: the script does that for us.

Upstream Debian repositories are NOT for production

The produced package repositories are there because we have interconnected build dependencies, needed to run the unit tests at build time. That is the only reason why these Debian repositories exist. They are not for production use. If you wish to deploy OpenStack, we very much recommend using packages from distributions (like Debian or Ubuntu). Indeed, the infrastructure Debian repositories are updated multiple times daily; as a result, it is very likely that you will experience download failures (hash or file size mismatches and such). Also, the functional tests aren’t yet wired into the CI/CD in the OpenStack infra, and therefore we cannot yet guarantee that the packages are usable.

Improving the build infrastructure

There’s a bunch of things which we could do to improve the build process. Let me give a list of things we want to do.

  • Get sbuild pre-set-up in the Jessie VM images, so we can save 3 minutes per build. This means writing a diskimage-builder element for sbuild.
  • Have the infrastructure use a state-of-the-art Debian ftpsync mirror, instead of the current reprepro mirroring which produces an unsigned repository that we can’t use for sbuild-createchroot. This will improve things a lot, as currently there are lots of build failures because of httpredir.debian.org mirror inconsistencies (a very frustrating loss of time).
  • For each packaging change, there are 3 builds: the check job, the gate job, and the POST job. This is a waste of time and resources, as we only need to build a package once. It will hopefully be possible to fix this when the OpenStack infra team deploys Zuul 3.

Generalizing to Debian

During DebConf 16, I had very interesting talks with the DSA (Debian System Administrators) about deploying such a CI/CD for the whole Debian archive, interfacing Gerrit with something like dgit and a build CI. I was told that I should provide a proof of concept first, which I very much agreed with. Such a PoC now exists, within the OpenStack infra. I very much welcome any Debian contributor to try it, through a packaging patch. If you wish to do so, you should read how to contribute to OpenStack here: https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer and then simply send your patch with “git review”.

This system, however, currently only fits the “git tag” based packaging workflow. We’d have to do a little bit more work to make it possible to use pristine-tar (basically, allowing pushes to the upstream and pristine-tar branches without any CI job connected to the push).

Dear DSA team, now that we have a nice PoC that is working well, on which the OpenStack PKG team is maintaining hundreds of packages, shall we try to generalize it and provide such an infrastructure for every packaging team and DD?

Dirk Eddelbuettel: Rcpp now used by 800 CRAN packages

17 October, 2016 - 02:42

A moment ago, Rcpp hit another milestone: 800 packages on CRAN now depend on it (as measured by Depends, Imports and LinkingTo declarations). The graph on the left depicts the growth of Rcpp usage over time.

The easiest way to compute this is to use the reverse_dependencies_with_maintainers() function from a helper scripts file on CRAN. This still yields a few false positives from packages declaring a dependency but not actually containing C++ code, and the like. There is also a helper function revdep() in the devtools package, but it includes Suggests:, which does not firmly imply usage and hence inflates the count. I have always opted for a tighter count with corrections.

Rcpp cleared 300 packages in November 2014. It passed 400 packages in June of last year (when I only tweeted about it), 500 packages less than a year ago in late October, 600 packages this March and 700 packages this July. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four percent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of last year, seven percent just before Christmas and eight percent this summer.

800 user packages is a staggeringly large and humbling number. This puts more than some responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been.

At the rate we are going, the big 1000 may be hit before we all meet again for useR! 2017.

And with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steinar H. Gunderson: backup.sh opensourced

16 October, 2016 - 20:00

It's been said that backup is a bit like flossing; everybody knows you should do it, but nobody does it.

If you want to start flossing, an immediate question is what kind of dental floss to get—and conversely, for backup, which backup software do you want to rely on? I had some criteria:

  • Automated full-system backup, not just user files.
  • Self-controlled, not cloud (the cloud economics don't really make sense for 10 TB+ of backup storage, especially when you factor in restore cost).
  • Does not require one file on the backup server for each file on the backed-up server (that makes for infinitely long fscks, greatly increases the risk of file system corruption, frequently gives performance problems on the backup host, and makes inter-file compression impossible).
  • Not written in Python (makes for glacial speeds).
  • Pull backups, not push (so a backed-up server cannot delete its own backups in event of a break-in).
  • Does not require any special preparation or lots of installation on each server.
  • Ideally, restore using UNIX standard tools only.

I looked at basically everything that existed in Debian and then some, and all of them failed. But Samfundet had its own script that's basically just a simple wrapper around tar and ssh, which has worked for 15+ years without a hitch (including several restores), so why not use it?

All the authors agreed to a GPLv2+ licensing, so now it's time for backup.sh to meet the world. It does about the simplest thing you can imagine: ssh to the server and use GNU tar to tar down every filesystem that has the “dump” bit set in fstab. Every 30 days, it does a full backup; otherwise, it does an incremental backup using GNU tar's incremental mode (which makes sure you will also get information about file deletes). It doesn't do inter-file diffs (so if you have huge files that change only a little bit every day, you'll get blowup), and you can't do single-file restores without basically scanning through all the files; tar isn't random-access. So it doesn't do much fancy, but it works, and it sends you a nice little email every day so you can know your backup went well. (There's also a less frequently used mode where the backed-up server encrypts the backup using GnuPG, so you don't even need to trust the backup server.) It really takes fifteen minutes to set up, so now there's no excuse. :-)
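
The core of the approach boils down to something like this (a minimal sketch, not the actual backup.sh; host names and paths are made up):

# pull one filesystem from the backed-up host over ssh; on incremental runs,
# GNU tar's snapshot file ensures only changes (including deletes) are shipped
ssh root@server.example.org \
    'tar -cf - --one-file-system --listed-incremental=/var/lib/backup.snar -C / .' \
    > /backup/server.example.org/$(date +%Y-%m-%d).tar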

Oh, and the only good dental floss is this one. :-)

Rémi Vanicat: Trying to install Debian on G752VM-GC006T

16 October, 2016 - 16:08

I'm trying to install Debian GNU/Linux on my new ASUS G752VM-GC006T.

So here's what I've discovered:

  • It's F2 to enter the BIOS, and from the last BIOS section you can directly boot from any device.
  • It boots from the netinst DVD
  • The netinst installer can't see the SSD disk
  • The trackpad doesn't work
  • After a successful install, booting the fresh install failed. I had to use the recovery tools to install the non-free nvidia package to get Debian to boot successfully.
  • I mostly use sid on my computers (mostly to test problems and report them). It was a bad idea: Debian stopped finding its own disk. Adding pci=nomsi to the kernel options fixes this (see the sketch below).
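
To make that permanent, the usual Debian way is something like this (standard GRUB paths):

# in /etc/default/grub, add pci=nomsi to the default kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nomsi"
# then regenerate the GRUB configuration and reboot:
update-grub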

So I have a working Linux. My problems are:

  • I still can't see the SSD disk from Linux
  • I cannot easily dual-boot:
    • Linux can't see the SSD where Windows is,
    • the Windows boot loader doesn't want to start Debian, simply because it doesn't want to,
    • at least the BIOS can boot both of them, but there is no "pretty" menu
  • the trackpad is not working
  • 0.5 TB feels small today...

And the question is: where do I report those bugs?

Mirco Bauer: Debian 8 on Dell XPS 15

16 October, 2016 - 10:46

It was time for a new work laptop, so I got a Dell XPS 15 9950. I wasn't planning to write a blog post about how to install Debian 8 "Jessie" on this laptop, but since it wasn't just install-and-use, I will share what is needed to get the wifi and graphics card to work.

So first download the DVD-1 AMD64 image of Debian 8 from your favorite download mirror. The closest one for me is the Hong Kong mirror. You do not need to download the other DVDs, just the first one is sufficient. The netinstaller and CD images will not provide a good experience since they need a working network/internet connection. With the DVD image you can do a full default desktop install and most things will just work out-of-the-box.

Now you can do a regular install, no special procedure or anything will be needed. Depending on your desktop selection it will boot right into lovely GNOME3.

You will quickly notice that the wifi is not working out of the box, though. It is a Qualcomm Atheros QCA6174, and the Linux kernel version 3.16 shipped with Debian 8 does not support that wifi card. This card needs the ath10k_pci kernel module, which is included in a newer Linux kernel package from the Debian backports archive. If you don't have the Dell docking station (I don't either), then there is no wired ethernet that you can use to get a temporary Internet connection. So use a different computer with Internet access to manually download the needed packages from the Debian backports archive (linux-base, the backports linux-image, firmware-atheros, firmware-misc-nonfree and xserver-xorg-video-intel, i.e. the same ones installed with dpkg below) and put them on a USB disk.
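
If the other computer also runs Debian with jessie-backports enabled in its sources, one way to fetch them might be (a sketch; the package names mirror the dpkg commands further down):

# run on the helper machine, inside the mounted USB disk
cd /media/usbdisk
apt-get download -t jessie-backports \
    linux-base linux-image-4.7.0-0.bpo.1-amd64 \
    firmware-atheros firmware-misc-nonfree xserver-xorg-video-intel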

After that, connect the USB disk to the new Dell laptop and mount it using the GNOME3 file browser (nautilus). It will mount the USB disk to /media/$your_username/$volume_name. Become root using sudo or su. Then install all the downloaded packages from the USB disk like this:

cd /media/$your_username/$volume_name
dpkg -i linux-base_*.deb
dpkg -i linux-image-4.7.0-0.bpo.1-amd64_*.deb
dpkg -i firmware-atheros_*.deb
dpkg -i firmware-misc-nonfree_*.deb
dpkg -i xserver-xorg-video-intel_*.deb

That's it. If dpkg finished without error messages, then you can reboot and your wifi and graphics card should just work! After the reboot you can verify that the wifi card is recognized by running "/sbin/iwconfig" and checking whether wlan0 shows up.

Have fun with your Dell XPS and Debian!

PS: if this does not work for you, leave a comment or write to meebey at meebey . net

Dirk Eddelbuettel: tint 0.0.3: Tint Is Not Tufte

16 October, 2016 - 08:17

The tint package, whose name stands for Tint Is Not Tufte, on CRAN offers a fresh take on the excellent Tufte style for html and pdf presentations.

It marks a milestone for me: I finally have a repository with more "stars" than commits. Gotta keep an eye on the big prize...

Kidding aside, and as a little teaser, here is what the pdf variant looks like:

This release corrects one minor misfeature in the pdf variant. It also adds some spit and polish throughout, including a new NEWS.Rd file. We quote from it the entries for the current as well as previous releases:

Changes in tint version 0.0.3 (2016-10-15)
  • Correct pdf mode to not use italics in the table of contents (#9 fixing #8); also added color support for links etc

  • Added (basic) Travis CI support (#10)

Changes in tint version 0.0.2 (2016-10-06)
  • In html mode, correct use of italics and bold

  • Html function renamed to tintHtml Roboto fonts with (string) formats and locales; this allow for adding formats; (PRs #6 and #7)

  • Added pdf mode with new function tintPdf(); added appropriate resources (PR #5)

  • Updated resource files

Changes in tint version 0.0.1 (2016-09-24)
  • Initial (non-CRAN) release to ghrr drat

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Thorsten Alteholz: DOPOM: libmatthew-java – Unix socket API and bindings for Java

16 October, 2016 - 04:01

While looking at the “action needed” paragraph of one of my packages, I saw that a dependency was orphaned and needed a new maintainer. So I decided to restart DOPOM (Debian Orphaned Package Of the Month), which I started in 2012 with ent as the first package.

This month I adopted libmatthew-java. Sure, it was not a big deal, as the QA team had already done a good job and kept the package in shape. But now there is one burden lifted from their shoulders.

According to the Work-Needing and Prospective Packages page, 956 packages are orphaned at the moment. If every Debian contributor grabbed one of them, we could unwind the QA team (no, just kidding). So, similar to NEW, which was down to 0 this year, can we get rid of the WNPP list as well? At least for a short time?

Daniel Silverstone: Gitano - Approaching Release - Access Control Changes

15 October, 2016 - 10:11

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects.

In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:

define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read

And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:

define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read

This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:

allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]

Of course, this is generally neater for simpler rules, if you wanted to add another user then it might make sense to go for:

define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers

The nice thing about this sub-define syntax is that it's usable basically anywhere you'd use the name of a previously defined thing; sub-defines are compiled in much the same way, and Richard worked hard to get good error messages out of them, just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, so the sub-define approach is much, much better.

If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:

  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO] and similarly for any auto_group_BAR to [group exact BAR].
  3. You can now upgrade Gitano safely.
No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we have had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support (refs ~^refs/heads/${user}/). When we wanted to add proper PCRE regex support we added a syntax of the form user pcre ^/.+?..., where the middle word (pcre here) could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean and in what order, and it was getting unwieldy.

To simplify matters, none of that "backward compatibility" remains in Gitano. You instead MUST use the explicit what how with match form (e.g. user exact steve). To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, the kind of match can be prefixed with a ! to invert it, and for natural-looking rules, not is an alias for !is.

This means that your rulesets MUST be updated to use the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps:

  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/ which can trivially be replaced with: define is_gitano_ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano

If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system.

Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.

Dirk Eddelbuettel: anytime 0.0.3: Extension and fixes

15 October, 2016 - 07:37

anytime arrived on CRAN with releases 0.0.1 and 0.0.2 about a month ago. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects.

Release 0.0.3 brings a bugfix for Windows (where, for dates before the epoch of 1970-01-01, accessing the tm_isdst field for daylight savings would crash the session) and a small (unexported) extension to test format strings. This last feature plays well with the ability to add format strings, which we added in 0.0.2.

The NEWS file summarises the release:

Changes in anytime version 0.0.3 (2016-10-13)
  • Added (non-exported) helper function testFormat()

  • Do not access tm_isdst on Windows for dates before the epoch (pull request #13 fixing issue #12); added test as well

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Michal Čihař: New free software projects on Hosted Weblate

14 October, 2016 - 23:00

Hosted Weblate also provides free hosting for free software projects. I'm quite slow in processing the hosting requests, but when I do, I process them in a batch and add several projects at once.

This time, the newly hosted projects include:

Filed under: Debian English SUSE Weblate | 0 comments


Mike Gabriel: [Arctica Project] Release of nx-libs (version 3.5.99.2)

14 October, 2016 - 22:47
Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Thursday, Oct 13th, version 3.5.99.2 of nx-libs has been released [1].

This release brings a major backport of libNX_X11 to the status of libX11 1.3.4 (as provided by X.org). On top of that, all CVE fixes provided for libX11 by the Debian X11 Strike Force and the Debian LTS team have been cherry-picked into libNX_X11, too. This big chunk of work was performed by Ulrich Sibiller, and there is more to come: we currently have a pull request pending review that backports more commits from libX11 (bumping the status of libNX_X11 to the state of libX11 1.6.4, which is the current HEAD on the X.org Git site).

Another big clean-up performed by Ulrich is the split-up of the XKB code, which used to be symlinked between libNX_X11 and nx-X11/programs/Xserver. This brings in some code duplication but allows maintaining the nxagent Xserver code and the libNX_X11 code separately.

In the upstream ChangeLog you will find some more items around code clean-ups and .deb packaging, see the diff [2] on the ChangeLog file for details.

So for this release, a very special and massive thanks goes to Ulrich Sibiller!!! Well done!!!

Change Log

A list of recent changes (since 3.5.99.1) can be obtained from here.

Known Issues

This version of nx-libs is known to segfault when LDFLAGS / CFLAGS have the -pie / -fPIE hardening flags set. This issue is currently under investigation.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

Ubuntu developers, please note: we have added nightly builds for the latest Ubuntu release to our build server. So far this has been Ubuntu 16.10, but we will soon drop 16.10 support in the nightly builds and add 17.04 support.

References

Antoine Beaupré: Managing good bug reports

14 October, 2016 - 22:11

Bug reporting is an art form that is too often neglected in software projects. Bug reports allow contributors to participate without deep technical knowledge and at the same time provide a crucial space for developers to be made aware of issues with their software that they could not have foreseen or found themselves, for lack of resources, variety or imagination.

Prior art

Unfortunately, there are rarely good guidelines for submitting bug reports. Historically, people have pointed towards How to report bugs effectively or How to ask questions the smart way. While those guides can be useful for motivated people and may seem attractive references for project managers, they suffer from serious issues:

  • they are written by technical people, for non-technical people
  • as a result, they have a deeply condescending attitude such as calling people "stupid" or various animal names like "mongoose"
  • they are also very technical themselves: one starts with a copyright notice and a changelog, the other uses magic words like "Core dumps" and $Id$
  • they are too long: sgtatham's is about 3600 words long, and esr's is even longer at about 11800 words. Those texts will take an average reader about 20 to 60 minutes to read, according to research

Individual projects have their own guides as well. Linux has the REPORTING_BUGS file, which is a much shorter 1200 words and can be read in under 5 minutes, provided that you can understand the topic at hand. Interestingly, that guide refers to both esr's and sgtatham's guidelines, which means that, in the degenerate case where the user hasn't had the "privilege" of reading esr's prose already, they will have an extra hour and a half of reading to do to have honestly followed the guidelines before reporting the bug.

I often find good documentation in the Tails project. Their bug reporting guidelines are easily accessible and quick to read, although they still might be too technical. It could be argued that you need to get technical at some point to get that information out, of course.

In the Monkeysign project, I have started a bug reporting guide that doesn't yet address all those issues. I am considering writing a new guide, but I figured I would look at other people's work and get feedback before writing my own standard.

What's the point?

Why have those documents been written? Are people really expected to read them before seeking help? It seems to me unlikely that someone would:

  1. be motivated enough to do something about a broken part of their computer
  2. figure out they can do something about it
  3. read a fifteen-thousand-word novel about how to report a bug...
  4. just to finally write a 20-line bug report that has no warranty of support attached to it

And if I were a paying customer, I wouldn't want to be forced to waste my time reading that prose either: it's your job to help me fix your broken things, not the reverse. As someone doing consulting these days, I totally understand: it's not you, the user, it's us, the developers, that have a problem. We have been socialized through computers, and it makes us weird and obtuse, but that's no excuse, and we need to clean up our act.

Furthermore, it's surprising how often we get (and make!) bug reports that are difficult to use. The Monkeysign project is very "technical" and I expected that the bug reports I would get would be well written, with ways to reproduce and so on, but it turned out that I received bug reports that were all over the place, didn't have any way of reproducing the issue, or were simply incomplete. Those three bug reports were filed by people that I know to be very technically capable: one is a fellow Debian developer, the second had filed a good bug report 5 days before, and the third one is a contributor who had sent good patches before.

In all three cases, they knew what they were doing. Those three people have probably read the guidelines mentioned above at some point. They may even have read the Monkeysign bug reporting guidelines as well. I can only explain those bug reports by a lack of time: people thought the issue was obvious, and that it would get fixed rapidly because, obviously, something is broken.

We need a better way.

The takeaway

What are those guides trying to tell us?

  1. ask questions in the right place
  2. search for similar questions and issues before reporting the bug
  3. try to make the developers reproduce the issues
  4. failing that, try to describe the issue as well as you can
  5. write clearly, be specific and verbose yet concise

There are obviously contradictions in there, like sgtatham telling us to be verbose and esr telling us to, basically, not be verbose. There is definitely a tension there, and there are many, many more details about how great bug reports can be if done properly.

I tend towards the side of terseness in our descriptions: people who know how to be concise will be, and people who don't will most likely not learn by reading a 12,000-word novel that, in itself, didn't manage to be parsimonious.

But I am willing to allow for verbosity in bug reports: I prefer too many details instead of missing a key bit of information.

Issue trackers

Step 1 is our job: we should send people to the right place and give them the right tools. Monkeysign used to manage bugs with bugs-everywhere, and this turned out to be a terrible idea: you had to understand git and bugs-everywhere to file any bug report. As a result, exactly zero bug reports were filed by non-developers during the whole time BE was used, although some bugs were filed in the Debian Bugtracker.

So have a good bug tracker. A mailing list or email address is not a good bug tracker: you lose track of old issues, and it's hard for newcomers to search the archives. It does have the advantage of having a unified interface for the support forum and bug tracking, however.

Redmine, Gitlab, Github and others are all decent-enough bug trackers. The key point is that the issue tracker should be publicly available, and users should be able to register easily to file new issues. You should also be able to mass-edit tickets and users should be able to discover the tracker's features easily. I am sorry to say that the Debian BTS somewhat falls short on those two features.

Step 2 is a shared responsibility: there should be an easy way to search for issues, and we should help the user looking for similar issues. Stackexchange sites do an excellent job at this, by automatically searching for similar questions while you write your question, suggesting similar ones in an attempt to weed out duplicates. Duplicates still happen, but they can then clearly be marked and linked with a distinct mechanism. Most bug trackers do not offer such high level functionality, but should, so I feel the fault lies more on "our" end than at the user's end.

Reproducing the environment

Step 3 and 4 are more or less the user's responsibility. We can detail in our documentation how to clearly share the environment where we reproduced the bug, for example, but in the end, the user decides if they want to share that information or not.

In Monkeysign, I have finally implemented joeyh's suggestion of shipping the test suite with the program. I can now tell people to run the test suite in their environment to see if this is a regression that is specific to their environment - so a known bug, in a way - or a novel bug for which I can look at writing a new unit test. I also include way more information about the environment in the --version output, an idea I brought forward in the borg project to ease debugging. That way, people can just send the output of monkeysign --test and monkeysign --version, and I have a very good overview of what is happening on their end. Of course, Monkeysign also supports the usual --verbose and --debug flag that users should enable when reproducing issues.

Another idea is to report bugs directly from the application. We have all seen Firefox or other software have automatic bug reporting tools, but somehow those seem unsatisfactory for a user: we have no feedback of where the report goes, if it's followed up on. It is useful for larger project to get statistical data, but not so useful for users in the short term.

Monkeysign tries to handle exceptions in the code in a graceful way, but could do better. We use a small library to handle exceptions, but that library has since then been improved to directly file bugs against the Github project. This assumes the user is logged into Github, but it is nice to pre-populate bug reports with the relevant information up front.

Issue templates

In the meantime, to make sure people provide enough information, I have now moved a lot of the bug reporting guidelines to a separate issue template. That issue template is available through the issue creation form now, although it is not enabled by default, a weird limitation of Gitlab. Issue templates are available in Gitlab and Github.

Issue templates somewhat force users into a straitjacket: there is already something there to structure their bug report. Those could be distinct form elements that have to be filled in, but I like the flexibility of the template, and the possibility for users to just escape the formalism and plead for help in their own way.
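
For the record, in Gitlab such a template is just a Markdown file dropped into the repository (the path below is Gitlab's convention as far as I know; the content is a hypothetical template, not Monkeysign's actual one):

.gitlab/issue_templates/Bug.md:

## Steps to reproduce

## Expected behaviour

## Actual behaviour

## Output of monkeysign --version, monkeysign --test and monkeysign --debug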

Issue guidelines

In the end, I opted for a short few paragraphs in the style of the Tails documentation, including a reference to sgtatham, as an optional future reference:

  • Before you report a new bug, review the existing issues in the [online issue tracker][] and the [Debian BTS for Monkeysign][] to make sure the bug has not already been reported elsewhere.

  • The first aim of a bug report is to tell the developers exactly how to reproduce the failure, so try to reproduce the issue yourself and describe how you did that.

  • If that is not possible, try to describe what went wrong in detail. Write down the error messages, especially if they have numbers.

  • Take the necessary time to write clearly and precisely. Say what you mean, and make sure it cannot be misinterpreted.

  • Include the output of monkeysign --test, monkeysign --version and monkeysign --debug in your bug reports. See the issue template for more details about what to include in bug reports.

If you wish to read more about issues regarding communication in bug reports, you can read [How to Report Bugs Effectively][], which takes around 20 to 30 minutes.

[How to Report Bugs Effectively]: http://www.chiark.greenend.org.uk/~sgtatham/bugs.html [Debian BTS for Monkeysign]: https://bugs.debian.org/monkeysign [online issue tracker]: https://0xacab.org/monkeysphere/monkeysign/issues

Unfortunately, short of rewriting sgtatham's guide, I do not feel there is much more we can do by way of a general guide. I find esr's guide too verbose and commanding, so sgtatham's it will be for now.

The prose and literacy

In the end, there is a fundamental issue with reporting bugs: it assumes our users are literate and capable of writing amazing prose that we will enjoy reading as much as the latest J.K. Rowling novel (if you're into that kind of thing). It's just an unreasonable expectation: some of your users don't even speak the same language as you, let alone read or write it. This makes for challenging collaboration, to say the least. This is where automated reporting makes sense: it doesn't require the user's intervention, and the communication is mediated by machines without human intervention and their pesky culture.

But we should, as maintainers, "be liberal in what we accept and conservative in what we send". Be tolerant, and help your users in fixing their issues. It's what you are there for, after all.

And in the end, we all fail in the same way. In an attempt to improve the situation around bug reporting guides, I seem to have written a 2000-word short story myself, which will have taken up a hopefully pleasant 10 minutes of your time at minimum. Hopefully I will have succeeded at being clear, specific, verbose and concise all at once, and I look forward to your feedback on how to improve our bug reporting culture.

Jonathan Dowland: Hi-Fi Furniture

14 October, 2016 - 21:23

sadly obsolete

For the last four years or so, I've had my Hi-Fi and the vast majority of my vinyl collection stored in a self-contained, mildly-customized Ikea unit. Since moving house this has been in my dining room—which we have always referred to as the "play room", since we have a second dining room in which we actually dine.

The intention for the play room was for it to be the room within which all our future children would have their toys kept, in an attempt to keep the living room from being overrun with plastic. The time has thus come for my Hi-Fi to come out of there, so we've moved it to our living room. Unfortunately, there's not enough room in the living room for the Ikea unit: I need something narrower for the space available.

via IkeaHackers.net

In the spirit of my original hack, I started looking at what others might have achieved with Ikea components. There are some great examples of open-style units built out of the (extremely cheap) Lack coffee tables, such as this ikeahackers article, but I'd prefer something more enclosed. One problem I've had with the Expedit unit was my cat trying to scratch the records, so I ended up putting framed records at the front to cover the spines of the records within. If I were keeping the unit, I'd look at fitting hinges (another ikeahackers article).

Aside from hacked Ikea stuff, there are a few companies offering traditional enclosed Hi-Fi cabinets. I'm going to struggle to fit both the equipment and a subset of my records into these, so I might have to look at storing them separately. In some ways that makes life easier: the records could go into a 1x4 Ikea KALLAX unit, leaving the amp and deck to be housed somewhere else. Perhaps I could look at a bigger unit for under the TV.

My parents have a nice Hi-Fi unit that pretends to be a chest of drawers. I'm fairly sure my Dad custom-built it, as it has a hinged top to provide access to the turntable and I haven't seen anything like that on the market.

That brings me onto thinking about other AV things I'd like to achieve in the living room. I've always been interested in exploring surround sound, but my initial attempt in my prior flat did not go well, either because the room was not terribly well suited acoustically, or because the Pioneer unit I bought was rubbish, or both. It seems that there aren't really AV receivers designed to satisfy both people wanting to use them in a Hi-Fi setting and those wanting a home cinema setting. I could stick to stereo and run the TV into my existing (or a new) amplifier, subject to some logistics around wiring. A previous house owner ran some phono cables under the hard-wood flooring from the TV alcove to the opposite side of the fireplace, which might give me some options.

There's also the world of wireless audio, Sonos etcetera. Realistically, the majority of my music is digital nowadays, and it would be good to be able to listen to it conveniently around the house. I've heard good reports about the entry-level Sonos stuff, but they seem to be mono, and even the more high-end ones with lots of drivers have very little separation. I did buy a Chromecast Audio on a whim recently, but I haven't looked at it much yet: perhaps that's part of the solution.

So, lots of stuff in the melting pot to figure out here!

Daniel Silverstone: Gitano - Approaching Release - Changes

14 October, 2016 - 20:30

Continuing on from the previous article, here is a (probably incomplete) list of the critical changes to Gitano which have been, or will be, worked on during the run toward a 1.0 release. Each of these will have a blog posting to discuss what the changes mean for current and future users. Sometimes I'll aggregate postings, sometimes I won't.

The following are some highlights from the past little while of development which has been undertaken by Richard and myself. Each item is, I feel, important enough to warrant commentary, even for those who already use Gitano.

  • Lace now supports a sub-define syntax: [foo bar] which makes for simpler rulesets.
  • Gitano no longer creates auto_user_XXX and auto_group_XXX Lace predicates
  • Gitano no longer supports "basic" simple matches of the form user foo but instead requires a match kind such as group prefix bar-.
  • Gitano is gaining i18n/l10n support; though it will not be complete for version 1.0, the basics will be in place.
  • Gitano is gaining a much larger integration test suite using yarn.
  • Deprecated commands have now been removed from Gitano. (e.g. no more set-owner)
  • Gitano has gained PGP/GPG signature verification for commits and tags.

Any number of smaller things have been done which fall below some arbitrary barrier for telling you about. If you're aware of any of them and feel they are worthwhile telling the world about, then please prod me and I'll add an article to the series.

Finally it's worth noting that the effort to get all this into Debian Stretch proceeds apace. Of the eight packages needed, at the time of posting: one was already in and has been updated (luxio), three have been accepted into Debian already (supple, clod, lua-scrypt), two are in NEW (gall and lace), and that leaves the newest library (tongue) and then Gitano itself still to go. The Debian FTP team have been awesome in helping me with all this, so thanks go to them.

Michal Čihař: motranslator 2.0

14 October, 2016 - 11:00

Yesterday, motranslator 2.0 was released. As the version change suggests, there are some important changes under the hood.

Full list of changes:

  • Consistently use camelCase in API
  • No longer relies on using eval()
  • Depends on symfony/expression-language for calculations

As you can see, the SimpleMath library announced yesterday is not being used in the end; I've moved to an existing library instead. Somehow I had misunderstood the library description and thought that it behaves like PHP, which would be a problem for us (or would bring the need to add parentheses around the ternary operator, as we did with eval()). But this is not the case, and the ternary operator behaves sanely in ExpressionLanguage, so we're good to use it.

Anyway, if you were using MoTranslator, it might be a good idea to upgrade and check whether the API changes affect you.

Filed under: Debian English phpMyAdmin | 0 comments


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.