Planet Debian

Planet Debian - http://planet.debian.org/

Dirk Eddelbuettel: tint 0.0.1: Tint Is Not Tufte

4 hours 33 min ago

A new experimental package is now on the ghrr drat. It is named tint, which stands for Tint Is Not Tufte. It provides an alternative for Tufte-style HTML presentation. I wrote a bit more on the package page and in the README in the repo -- so go read those.

Here is just a little teaser of what it looks like; the full underlying document is available too.

For questions or comments use the issue tracker off the GitHub repo. The package may be short-lived as its functionality may end up inside the tufte package.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Iain R. Learmonth: Azure from Debian

5 hours 43 min ago

Around a week ago, I started to play with programmatically controlling Azure. I needed to create and destroy a bunch of VMs over and over again, and this seemed like something I would want to automate once instead of doing manually and repeatedly. I started to look into the azure-sdk-for-python and mentioned in #debian-python that I wanted to look into this. ardumont from Software Heritage noticed, as he was planning to package azure-storage-python. We joined forces and started a packaging team for Azure-related software.

I spoke with the upstream developer of the azure-sdk-for-python and he pointed me towards azure-cli. It looked to me like this fit my use case better than the SDK alone, as it had the high-level commands I was looking for.

Between ardumont and me, in the space of just under a week, we have now packaged: python-msrest (#838121), python-msrestazure (#838122), python-azure (#838101), python-azure-storage (#838135), python-adal (#838716), python-applicationinsights (#838717) and finally azure-cli (#838708). Some of these packages are still in the NEW queue at the time I’m writing this, but I don’t foresee any issues with these packages entering unstable.

azure-cli, as we have packaged it, is the new Python-based CLI for Azure. The Microsoft developers gave it the tagline of “our next generation multi-platform command line experience for Azure”. In the short time I’ve been using it, I’ve been very impressed with it.

In order to set it up initially, you have to configure a couple of defaults using az configure. After that, you need to run az login, which is an entirely painless process as long as you have a web browser handy in order to perform the login.

After those two steps, you’re only two commands away from deploying a Debian virtual machine:

az resource group create -n testgroup -l "West US"
az vm create -n testvm -g testgroup --image credativ:Debian:8:latest --authentication-type ssh

This will create a resource group, then create a VM within that resource group, with a user matching your current username and with your SSH public key (~/.ssh/id_rsa.pub) installed. Once it returns the IP address, you can SSH in straight away.

Looking forward to some next steps for Debian on Azure, I’d like to get images built for Azure using vmdebootstrap and I’ll be exploring this in the lead up to, and at, the upcoming vmdebootstrap sprint in Cambridge, UK later in the year (still being organised).

Ritesh Raj Sarraf: Laptop Mode Tools 1.70

24 September, 2016 - 20:55

I'm pleased to announce the release of Laptop Mode Tools, version 1.70. This release adds support for AHCI Runtime PM, introduced in Linux 4.6. It also includes many important bug fixes, mostly related to invocation and determination of power states.

Changelog:

1.70 - Sat Sep 24 16:51:02 IST 2016
    * Deal harder with broken battery states
    * On machines with 2+ batteries, determine states from all batteries
    * Limit status message logging frequency. Some machines tend to send
      ACPI events too often. Thanks Maciej S. Szmigiero
    * Try harder to determine power states. As reports have shown, the
      power_supply subsystem has had incorrect state reporting on many machines,
      for both BAT and AC.
    * Relax conditional events where Laptop Mode Tools should be executed. This
      affected use cases of the laptop being docked and undocked.
      Thanks Daniel Koch.
    * CPU Hotplug settings extended
    * Cleanup states for improved Laptop Mode Tools invocation
      Thanks: Tomas Janousek
    * Align Intel P State default to what the actual driver (intel_pstate.c) uses
      Thanks: George Caswell and Matthew Gabeler-Lee
    * Add support for AHCI Runtime PM in module intel-sata-powermgmt
    * Many systemd and initscript fixes
    * Relax default USB device list. This avoids the long-standing issues with
      USB devices (mice, keyboards) that misbehaved during autosuspend

Source tarball, Fedora/SUSE RPM packages available at:
https://github.com/rickysarraf/laptop-mode-tools/releases

Debian packages will be available soon in Unstable.

Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki
Mailing List: https://groups.google.com/d/forum/laptop-mode-tools
    
 


James McCoy: neovim-enters-stretch

24 September, 2016 - 10:01

Last we heard from our fearless hero, Neovim, it was just entering the NEW queue. Well, a few days later it landed in experimental and now, 8 months to the day since then, it is in Stretch.

Enjoy the fish!

Arturo Borrero González: Blog moved from Blogger to Jekyllrb at GithubPages

23 September, 2016 - 14:25

This blog has finally moved away from Blogger to Jekyll, also changing the hosting and the domain. No new content will be published here.

New coordinates:

This Blogger blog will remain as an archive, since I don't plan to migrate the content from here to the new blog.
So, see you there!

Jonathan Dowland: WadC 2.1

23 September, 2016 - 03:22

Today I released version 2.1 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

This comes about a year after version 2.0. The most significant change is an adjustment to the line splitting algorithm to fix a long-standing issue when you try to draw a new linedef over the top of an existing one, but in the opposite direction. Now that this bug is fixed, it's much easier to overdraw vertical or horizontal lines without needing an awareness of the direction of the original lines.

The other big changes are in the GUI, which has been cleaned up a fair bit: it now has undo/redo support, the initial window size is twice as large, and it supports internationalisation, with a partial French translation included.

This version is dedicated to the memory of Professor Seymour Papert (1928-2016), co-inventor of the LOGO programming language.

For more information see the release notes and the reference.

Joey Hess: keysafe beta release

23 September, 2016 - 03:13

After a month of development, keysafe 0.20160922 is released, and ready for beta testing. And it needs servers.

With this release, the whole process of backing up and restoring a gpg secret key to keysafe servers is implemented. Keysafe is started at desktop login, and will notice when a gpg secret key has been created, and prompt to see if it should back it up.

At this point, I recommend only using keysafe for lower-value secret keys, for several reasons:

  • There could be some bug that prevents keysafe from restoring a backup.
  • Keysafe's design has not been completely reviewed for security.
  • None of the keysafe servers available so far or planned to be deployed soon meet all of the security requirements for a recommended keysafe server. While server security is only the initial line of defense, it's still important.

Currently the only keysafe server is one that I'm running myself. Two more keysafe servers are needed for keysafe to really be usable, and I can't run those.

If you're interested in running a keysafe server, read the keysafe server requirements and get in touch.

Gustavo Noronha Silva: WebKitGTK+ 2.14 and the Web Engines Hackfest

23 September, 2016 - 00:03

Next week our friends at Igalia will be hosting this year’s Web Engines Hackfest. Collabora will be there! We are gold sponsors, and have three developers attending. It will also be an opportunity to celebrate Igalia’s 15th birthday \o/. Looking forward to meeting you there! =)

Carlos Garcia has recently released WebKitGTK+ 2.14, the latest stable release. This is a great release that brings a lot of improvements and works much better on Wayland, which is becoming mature enough to be used by default. In particular, it fixes the clipboard, which was one of the main missing features, thanks to Carlos Garnacho! We have also been able to contribute a bit to this release =)

One of the biggest changes this cycle is the threaded compositor, which was implemented by Igalia’s Gwang Yoon Hwang. This work improves performance by not stalling other web engine features while compositing. Earlier this year we contributed fixes to make the threaded compositor work with the web inspector and fixed elements, helping with the goal of enabling it by default for this release.

Wayland was also lacking an accelerated compositing implementation. There was a patch to add a nested Wayland compositor to the UIProcess, with the WebProcesses connecting to it as Wayland clients to share the final rendering so that it can be shown to screen. It was not ready though, and there were questions as to whether that was the way to go; alternative proposals were floating around on how best to implement it.

At last year’s hackfest we had discussions about what the best path for that would be, where Collaborans Emanuele Aina and Daniel Stone (proxied by Emanuele) contributed quite a bit to figuring out how to implement it in a way that was both efficient and platform agnostic.

We later picked up the old patchset, rebased on the then-current master and made it run efficiently as proof of concept for the Apertis project on a i.MX6 board. This was done using the fancy GL support that landed in GTK+ in the meantime, with some API additions and shortcuts to sidestep performance issues. The work was sponsored by Robert Bosch Car Multimedia.

Igalia managed to improve and land a very well designed patch that implements the nested compositor, though it was still not as efficient as it could be, as it was using glReadPixels to get the final rendering of the page to the GTK+ widget through cairo. I have improved that code by ensuring we do not waste memory when using HiDPI.

As part of our proof of concept investigation, we got this WebGL car visualizer running quite well on our sabrelite imx6 boards. Some of it went into the upstream patches or proposals mentioned below, but we have a bunch of potential improvements still in store that we hope to turn into upstreamable patches and advance during next week’s hackfest.

One of the improvements that already landed was an alternate code path that leverages GTK+’s recent GL super powers to render using gdk_cairo_draw_from_gl(), avoiding the expensive copying of pixels from the GPU to the CPU and making it go faster. That improvement exposed a weird bug in GTK+ that causes a black patch to appear when shrinking the window, which I have a tentative fix for.

We originally proposed to add a new gdk_cairo_draw_from_egl() to use an EGLImage instead of a GL texture or renderbuffer. On our proof of concept we noticed it is even more efficient than the texturing currently used by GTK+, and could give us even better performance for WebKitGTK+. Emanuele Bassi thinks it might be better to add EGLImage as another code branch inside from_gl() though, so we will look into that.

Another very interesting Igalian addition to this release is support for the MemoryPressureHandler even on systems with no cgroups set up. The memory pressure handler is a WebKit feature which flushes caches and frees resources that are not being used when the operating system notifies it that memory is scarce.

We worked with the Raspberry Pi Foundation to add support for that feature to the Raspberry Pi browser and contributed it upstream back in 2014, when Collabora was trying to squeeze as much as possible from the hardware. We had to add a cgroups setup to wrap Epiphany in, back then, so that it would actually benefit from the feature.

With this improvement, the browser will benefit even without a custom cgroups setup, by having the UIProcess monitor memory usage and notify each WebProcess when memory is tight.

Some of these improvements were achieved by developers getting together at the Web Engines Hackfest last year and laying out the ground work or ideas that ended up in the code base. I look forward to another great few days of hackfest next week! See you there o/

Zlatan Todorić: Open Source Motion Comic Almost Fully Funded - Pledge now!

22 September, 2016 - 18:54

The Pepper and Carrot motion comic is almost funded. The pledge from Ethic Cinema put it on a good road (it had seemed that it would fail). Ethic Cinema is a non-profit organization that wants to make open source art (as they call it, Libre Art). Purism's creative director, François Téchené, is a member and co-founder of Ethic Cinema. Let's push the final bits so we can get this free-as-in-freedom artwork.

Notice that Pepper and Carrot is a webcomic (also available as a book), free-as-in-freedom artwork done by David Revoy, who also supports this campaign. The campaign is also supported by the Krita community on their landing page.

Let's do this!

Junichi Uekawa: Tried creating a GCE control panel for myself.

22 September, 2016 - 10:04
Tried creating a GCE control panel for myself. The GCP GCE control panel takes about 20 seconds for me to load, with the CPU busy loading the page. It does so many things and it's very complex. I've noticed that the API isn't that slow, so I used OAuth to let me do what I usually want: list the hosts, start/stop an instance, and list the IPs. It takes 500ms instead of 20 seconds. I've put the service on App Engine. The hardest part was figuring out how the OAuth2 dance was supposed to work; all the Python documentation I have seen was somewhat outdated, and I had to rewrite things to a workable state (the document was outdated but the sample code had been fixed). I had to read up on vendoring and pip and other stuff in order to get all the dependencies installed. I guess my Python App Engine skills are too rusty now.

C.J. Adams-Collier: virt manager cannot find suitable emulator for x86 64

22 September, 2016 - 01:42

Looks like I was missing qemu-kvm.

$ sudo apt-get install qemu-kvm qemu-system

Matthew Garrett: Microsoft aren't forcing Lenovo to block free operating systems

22 September, 2016 - 00:09
There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist and untrue, and it distracts from a genuine problem.

The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.

In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

(Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is miniscule)

Things are somewhat obfuscated by a statement from a Lenovo rep: "This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft." It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.


Sean Whitton: Boards in the philosophy dept. yesterday

21 September, 2016 - 07:02

[photos: boards.jpg and boards2.jpg]

Compare

Vincent Sanders: If I see an ending, I can work backward.

21 September, 2016 - 04:12
Now while I am sure Arthur Miller was referring to writing a play when he said those words, they have an oddly appropriate resonance for my topic.

In the early nineties Lou Montulli applied the idea of magic cookies to HTTP to make the web stateful; I imagine he had no idea of the issues he was going to introduce for the future. Like most web technology, it was a solution to an immediate problem which it has never been possible to subsequently improve.

The HTTP cookie is simply a way for a website to identify a connecting browser session so that state can be kept between retrieving pages. Due to shortcomings in the design of cookies and implementation details in browsers, this has led to a selection of unwanted side effects. The specific issue I am talking about here is the supercookie, where the super prefix in this context has similar connotations as when applied to the word villain.

Whenever the browser requests a resource (web page, image, etc.) the server may return, along with the resource, a cookie that your browser remembers. The cookie has a domain name associated with it, and when your browser requests additional resources, if the cookie domain matches the requested resource's domain name, the cookie is sent along with the request.

As an example, the first time you visit a page on www.example.foo.invalid you might receive a cookie with the domain example.foo.invalid, so the next time you visit a page on www.example.foo.invalid your browser will send the cookie along. Indeed, it will also send it along for any page on another.example.foo.invalid.

A supercookie is simply one where, instead of being limited to one sub-domain (example.foo.invalid), the cookie is set for a top level domain (foo.invalid), so when visiting any such domain (I used the invalid name in my examples but one could substitute com or co.uk) your web browser gives out the cookie. Hackers would love to be able to set up such cookies, as they could potentially control and hijack many sites at a time.

This problem was noted early on and browsers were not allowed to set cookie domains with fewer than two parts, so example.invalid or example.com were allowed but invalid or com on their own were not. This works fine for top level domains like .com, .org and .mil, but not for countries where the domain registrar has rules about second levels, like the uk domain (uk domains must have a second level, like .co.uk).

There is no way to generate the correct set of top level domains with an algorithm, so a database is required; it is called the Public Suffix List (PSL). This database is a simple text-formatted list with wildcard and inversion syntax, and at the time of writing is around 180Kb of text including comments, which compresses down to 60Kb or so with deflate.

A few years ago with ICANN allowing the great expansion of top level domains the existing NetSurf supercookie handling was found to be wanting and I decided to implement a solution using the PSL. At this point in time the database was only 100Kb source or 40Kb compressed.

I started by looking at the limited existing libraries. In fact only the regdom library was adequate, but it used 150Kb of heap to load the pre-processed list. This would have had the drawback of increasing NetSurf's heap usage significantly (we still have users on 8Mb systems). Because of this, and the need to run a PHP script to generate the pre-processed input, it was decided the library was not suitable.

Lacking other choices I came up with my own implementation which used a perl script to construct a tree of domains from the PSL in a static array with the label strings in a separate table. At the time my implementation added 70Kb of read only data which I thought reasonable and allowed for direct lookup of answers from the database.

This solution still required a pre-processing step to generate the C source code but perl is much more readily available, is a language already used by our tooling and we could always simply ship the generated file. As long as the generated file was updated at release time as we already do for our fallback SSL certificate root set this would be acceptable.

I put the solution into NetSurf, was pleased no-one seemed to notice, and moved on to other issues. Recently, while fixing a completely unrelated issue in the display of session cookies in the management interface, I realised I had some test supercookies present in the display. After the initial "that's odd" I realised with horror there might be a deeper issue.

It quickly became evident the PSL generation was broken and had been for a long time. Even worse, somewhere along the line the "redundant" empty generated source file had been removed, and the ancient fallback code path was all that had been used.

This issue had escalated somewhat from a trivial display problem. I took a moment to assess the situation a bit more broadly and came to the conclusion that there were a number of interconnected causes, centered around the lack of automated testing, which could be solved by extracting the PSL handling into a "support" library.

NetSurf has several of these support libraries, which can be used separately from the main browser project but are principally oriented towards it. These libraries are shipped and built in releases alongside the main browser codebase and mainly serve to make the API more obvious and modular. In this case my main aim was to have the functionality segregated into a separate module which could be tested, updated and monitored directly by our CI system, meaning the embarrassing failure I had found can never occur again.

Before creating my own library I did consider libpsl, a library which had been created since I wrote my original implementation. Initially I was very interested in using it, given it managed a data representation within a mere 32Kb.

Unfortunately the library integrates a great deal of IDN and punycode handling which was not required in this use case. NetSurf already has to handle IDN and punycode translations and uses punycode encoded domain names internally only translating to unicode representations for display so duplicating this functionality using other libraries requires a great deal of resource above the raw data representation.

I put the library together based on the existing code-generator perl program and integrated the test set that comes along with the PSL. I was a little alarmed to discover that the PSL had almost doubled in size since the implementation was originally written, and the trivial test program of the library was now weighing in at a hefty 120Kb.

This stemmed from two main causes:
  1. there were now many more domain label strings to be stored
  2. there were now many, many more nodes in the tree.
To address the first cause, the length of each domain label string was moved into the unused padding space within each tree node, removing a byte from each domain label and saving 6Kb. Next it occurred to me that, while building the domain label string table, if a label to be added already existed as a substring within the table it could be elided.

The domain labels were sorted from longest to shortest and added in order, searching for substring matches as the table was built; this saved another 6Kb. I am sure there are ways to reduce this further that I have missed (if you see them let me know!) but a 25% saving (47Kb to 35Kb) was a good start.

The second cause was a little harder to address. The structure I started with for representing nodes in the tree looked reasonable at first glance.

struct pnode {
    uint16_t label_index;      /* index into string table of label */
    uint16_t label_length;     /* length of label */
    uint16_t child_node_index; /* index of first child node */
    uint16_t child_node_count; /* number of child nodes */
};

I examined the generated table and observed that the majority of nodes were leaf nodes (they had no children), which makes sense given the type of data being represented. By allowing two types of node, one for labels and a second for the child node information, the node size would be halved in most cases, requiring only a modest change to the tree traversal code.

The only issue with this was finding a way to indicate that a node has child information. It was realised that domain labels have a maximum length of 63 characters, meaning their length can be represented in six bits, so a uint16_t was excessive. The space was split into two uint8_t parts, one for the length and one for a flag to indicate a child data node follows.

union pnode {
    struct {
        uint16_t index;       /* index into string table of label */
        uint8_t length;       /* length of label */
        uint8_t has_children; /* the next table entry is a child node */
    } label;
    struct {
        uint16_t node_index;  /* index of first child node */
        uint16_t node_count;  /* number of child nodes */
    } child;
};

static const union pnode pnodes[8580] = {
    /* root entry */
    { .label = { 0, 0, 1 } }, { .child = { 2, 1553 } },
    /* entries 2 to 1794 */
    { .label = { 37, 2, 1 } }, { .child = { 1795, 6 } },

    ...

    /* entries 8577 to 8578 */
    { .label = { 31820, 6, 1 } }, { .child = { 8579, 1 } },
    /* entry 8579 */
    { .label = { 0, 1, 0 } },
};

This change reduced the node array size from 63Kb to 33Kb, almost a 50% saving. I considered using bitfields to try to reduce the label length and has_children flag into a single byte, but such packing will not reduce the length of a node below 32 bits because it is unioned with the child structure.

A possibility of using the spare uint8_t derived by bitfield packing to store an additional label node in three other nodes was considered, but it added a great deal of complexity to node lookup and table construction for a saving of around 4Kb, so it was not incorporated.

With the changes incorporated, the test program was a much more acceptable 75Kb, reasonably close to the size of the compressed source but with the benefits of direct lookup. Integrating the library's single API call into NetSurf was straightforward and resulted in correct operation when tested.

This episode reminded me of the dangers of code that can fail silently. It exposed our users to a security problem that we thought had been addressed almost six years ago, and squandered the limited resources of the project. Hopefully it is a lesson we will not have to learn again any time soon. If there is a positive to take away, it is that the new implementation is more space efficient, automatically built and, importantly, tested.

Gunnar Wolf: Proposing a GR to repeal the 2005 vote for declassification of the debian-private mailing list

20 September, 2016 - 23:03

For the non-Debian people among my readers: The following post presents bits of the decision-taking process in the Debian project. You might find it interesting, or terribly dull and boring :-) Proceed at your own risk.

My reason for posting this entry is to get more people to read the accompanying options for my proposed General Resolution (GR), and have as full a ballot as possible.

Almost three weeks ago, I sent a mail to the debian-vote mailing list. I'm quoting it here in full:

Some weeks ago, Nicolas Dandrimont proposed a GR for declassifying
debian-private[1]. In the course of the following discussion, he
accepted[2] Don Armstrong's amendment[3], which intended to clarify the
meaning and implementation regarding the work of our delegates and the
powers of the DPL, and recognizing the historical value that could lie
within said list.

[1] https://www.debian.org/vote/2016/vote_002
[2] https://lists.debian.org/debian-vote/2016/07/msg00108.html
[3] https://lists.debian.org/debian-vote/2016/07/msg00078.html

In the process of the discussion, several people objected to the
amended wording, particularly to the fact that "sufficient time and
opportunity" might not be sufficiently bound and defined.

I am, as some of its initial seconders, a strong believer in Nicolas'
original proposal; repealing a GR that was never implemented in the
slightest way basically means the Debian project should stop lying,
both to itself and to the whole free software community within which
it exists, about something that would be nice but is effectively not
implementable.

While Don's proposal is a good contribution, given that in the
aforementioned GR "Further Discussion" won 134 votes against 118, I
hereby propose the following General Resolution:

=== BEGIN GR TEXT ===

Title: Acknowledge that the debian-private list will remain private.

1. The 2005 General Resolution titled "Declassification of debian-private
   list archives" is repealed.
2. In keeping with paragraph 3 of the Debian Social Contract, Debian
   Developers are strongly encouraged to use the debian-private mailing
   list only for discussions that should not be disclosed.

=== END GR TEXT ===

Thanks for your consideration,
--
Gunnar Wolf
(with thanks to Nicolas for writing the entirety of the GR text ;-) )

Yesterday, I spoke with the Debian project secretary, who confirmed my proposal has reached enough Seconds (that is, we have reached five people wanting the vote to happen), so I could now formally do a call for votes. Thing is, there are two other proposals I feel are interesting, and should be part of the same ballot, and both address part of the reasons why the GR initially proposed by Nicolas didn't succeed:

So, once more (and finally!), why am I posting this?

  • To invite Iain to formally propose his text as an option to mine
  • To invite more DDs to second the available options
  • To publicize the ongoing discussion

I plan to do the formal call for votes by Friday 23.

Michal Čihař: wlc 0.6

20 September, 2016 - 23:00

wlc 0.6, a command-line utility for Weblate, has just been released. There have been some minor fixes, but the most important news is that Windows and OS X are now supported platforms as well.

Full list of changes:

  • Fixed error when invoked without command.
  • Tested on Windows and OS X (in addition to Linux).

wlc is built on the API introduced in Weblate 2.6, which is still under development. Several wlc commands will not work properly when executed against Weblate 2.6; the first fully supported version is 2.7 (now running on both the demo and hosting servers). You can find usage examples in the wlc documentation.
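Under the hood, the Weblate API that wlc wraps is plain REST over HTTP with token authentication. The sketch below shows how a client might build an authenticated request against it; the `Authorization: Token <key>` header format follows Weblate's documented API, but the helper name and everything else here are illustrative, not part of wlc itself:

```python
from urllib.parse import urljoin
from urllib.request import Request


def weblate_request(base_url, path, token=None):
    """Build a Request for a Weblate REST endpoint.

    Joins the API base URL and endpoint path, asks for JSON, and adds
    Weblate-style token authentication when a key is given.
    """
    url = urljoin(base_url.rstrip("/") + "/", path.lstrip("/"))
    headers = {"Accept": "application/json"}
    if token:
        headers["Authorization"] = "Token " + token
    return Request(url, headers=headers)


# List projects on the hosted instance (illustrative token).
req = weblate_request("https://hosted.weblate.org/api/", "projects/", token="secret")
print(req.full_url)  # https://hosted.weblate.org/api/projects/
```

Passing the resulting request to `urllib.request.urlopen` (or issuing the equivalent call with any HTTP client) returns the JSON documents that wlc's commands present in friendlier form.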

Filed under: Debian English SUSE Weblate | 0 comments

Reproducible builds folks: Reproducible Builds: week 73 in Stretch cycle

20 September, 2016 - 19:58

What happened in the Reproducible Builds effort between Sunday September 11 and Saturday September 17 2016:

Toolchain developments

Ximin Luo started a new series of tools called (for now) debrepatch, to make it easier to automate checks that our old patches to Debian packages still apply to newer versions of those packages, and still make these reproducible.

Ximin Luo updated one of our few remaining patches for dpkg in #787980 to make it cleaner and more minimal.

The following tools were fixed to produce reproducible output:

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible - in our current test setup - after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

The following 3 packages were not changed, but have become reproducible due to changes in their build-dependencies: jaxrs-api, python-lua and zope-mysqlda.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

462 package reviews have been added, 524 have been updated and 166 have been removed this week, adding to our knowledge about identified issues.

25 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (10)
  • Filip Pytloun (1)
  • Santiago Vila (1)
diffoscope development

A new version of diffoscope 60 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

  • Mattia Rizzolo:
    • Various packaging and testing improvements.
  • HW42:
    • minor wording fixes
  • Reiner Herrmann:
    • minor wording fixes
strip-nondeterminism development

New versions of strip-nondeterminism 0.027-1 and 0.028-1 were uploaded to unstable by Chris Lamb. They included contributions from:

  • Chris Lamb:
    • Testing improvements, including better handling of timezones.
disorderfs development

A new version of disorderfs 0.5.1 was uploaded to unstable by Chris Lamb. It included contributions from:

  • Andrew Ayer and Chris Lamb:
    • Support relative paths for ROOTDIR; it no longer needs to be an absolute path.
  • Chris Lamb:
    • Print the behaviour (shuffle/reverse/sort) on startup to stdout.
Misc.

This week's edition was written by Ximin Luo and reviewed by a bunch of Reproducible Builds folks on IRC.

Mike Gabriel: Rocrail changed License to some dodgy non-free non-License

19 September, 2016 - 16:51
The Background Story

A year ago, or so, I took some time to search the internet for Free Software that can be used for controlling model railways via a computer. I was happy to find Rocrail [1] being one of only a few applications available on the market. And even more, I was very happy when I saw that it had been licensed under a Free Software license: GPL-3(+).

A month ago, or so, I collected my old Märklin (Digital) stuff from my parents' place and started looking into it again after 15+ years, together with my little son.

Some weeks ago, I remembered Rocrail and thought... Hey, this software was GPLed code and absolutely suitable for uploading to Debian and/or Ubuntu. I searched for the Rocrail source code and figured out that it had been hidden from the web some time in 2015 and that the license had obviously been changed to some non-free license (I could not figure out which license, though).

This made me very sad! I thought I had found a piece of software that might be interesting to test with my model railway. Whenever I stumble over some nice piece of Free Software that I plan to use (or even only play with), I upload it to Debian as one of the first steps. However, I try hard to stay away from non-free software, so Rocrail became a no-option for me back in 2015.

I should have moved on from here on...

Instead...

Proactively, I signed up with the Rocrail forum and asked the author(s) whether they see any chance of re-licensing the Rocrail code under the GPL (or any other FLOSS license) again [2]. When I encounter situations like this, I normally offer my expertise and help with such licensing matters for free. My impression by that point already was that something strange must have happened in the past: the developers chose the GPL, later stepped back from that decision, and have since hidden the source code from the web entirely.

Going deeper...

The Rocrail project's wiki states that anyone can request GitBlit access via the forum and obtain the source code via Git for local build purposes only. Nice! So, I asked for access to the project's Git repository, which I had been granted. Thanks for that.

Trivial Source Code Investigation...

So far so good. I investigated the source code (well, only the license meta stuff shipped with the source code...) and found that the main COPYING files (found at various locations in the source tree, containing a full version of the GPL-3 license) had been replaced by this text:

Copyright (c) 2002 Robert Jan Versluis, Rocrail.net
All rights reserved.
Commercial usage needs permission.

The replacement happened with these Git commits:

commit cfee35f3ae5973e97a3d4b178f20eb69a916203e
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Fri Jul 17 16:09:45 2015 +0200

    update copyrights

commit df399d9d4be05799d4ae27984746c8b600adb20b
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 14:49:12 2015 +0200

    update licence

commit 0daffa4b8d3dc13df95ef47e0bdd52e1c2c58443
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 10:17:13 2015 +0200

    update
Getting in touch again, still being really interested and wanting to help...

As I consider such a non-license as really dangerous when distributing any sort of software, be it Free or non-free Software, I posted the below text on the Rocrail forum:

Hi Rob,

I just stumbled over this post [3] [link reference adapted for this
blog post), which probably is the one you have referred to above.

It seems that Rocrail contains features that require a key or such
for permanent activation.  Basically, this is allowed and possible
even with the GPL-3+ (although Free Software activists will  not
appreciate that). As the GPL states that people can share the source
code, programmers can  easily deactivate license key checks (and
such) in the code and re-distribute that patchset as they  like.

Furthermore, the current COPYING file is really not protective at
all. It does not really protect you as copyright holder of the
code. Meaning, if people crash their trains with your software, you
could actually be legally prosecuted for that. In theory. Or in the
U.S. ( ;-) ). The main reason for having a long license text is to
protect you as the author in case your software causes trouble to
other people. You do not have any warranty disclaimer in your COPYING
file or elsewhere. Really not a good idea.

In that referenced post above, someone also writes about the nuisance
of license discussions in  this forum. I have seen various cases
where people produced software and did not really care for 
licensing. Some ended with a letter from a lawyer, some with some BIG
company using their code  under their copyright holdership and their
own commercial licensing scheme. This is not paranoia,  this is what
happens in the Free Software world from time to time.

A model that might be much more appropriate (and more protective to
you as the author), maybe, is a  dual release scheme for the code. A
possible approach could be to split Rocrail into two editions:  
Community Edition and Professional/Commercial Edition. The Community
Edition must be licensed in a  way that it allows re-using the code
in a closed-source, non-free version of Rocrail (e.g.   MIT/Expat
License or Apache2.0 License). Thus, the code base belonging to the
community edition  would be licensed, say..., as Apache-2.0 and for
the extra features in the Commercial Edition, you  may use any
non-free license you want (but please not that COPYING file you have
now, it really  does not protect your copyright holdership).

The reason for releasing (a reduced feature set of a) software as
Free Software is to extend the user base. The honey jar effect, as
practised by many huge FLOSS projects (e.g. Owncloud, GitLab, etc.).
If people could install Rocrail from the Debian / Ubuntu archives
directly, I am  sure that the user base of Rocrail will increase.
There may also be developers popping up showing  an interest in
Rocrail (e.g. like me). However, I know many FLOSS developers (e.g.
like me) that  won't waste their free time on working for a non-free
piece of software (without being paid).

If you follow (or want to follow) a business model with Rocrail, then
keep some interesting  features in the Commercial Edition and don't
ship that source code. People with deep interest may  opt for that.

Furthermore, another option could be dual licensing the code. As the
copyright holder of Rocrail you are free to juggle with licenses and
apply any license to a release you want. For example, this can be
interesting for a free-again Rocrail being shipped via Apple's iStore.

Last but not least, as you ship the complete source code with all
previous changes as a Git project  to those who request GitBlit
access, it is possible to obtain all earlier versions of Rocrail. In 
the mail I received with my GitBlit credentials, there was some text
that  prohibits publishing the  code. Fine. But: (in theory) it is
not forbidden to share the code with a friend, for local usage.  This
friend finds the COPYING file, frowns and rewinds back to 2015 where
the license was still  GPL-3+. GPL-3+ code can be shared with anyone
and also published, so this friend could upload the  2015-version of
Rocrail to Github or such and start to work on a free fork. You also
may not want  this.

Thanks for working on this piece of software! It is highly
interesting, and I am still sad that it does not come with a free
license anymore. I won't continue this discussion and will move on, unless
you  are interested in any of the above information and ask for more
expertise. Ping me here or directly  via mail, if needed. If the
expertise leads to parts of Rocrail becoming Free Software again, the 
expertise is offered free of charge ;-).

light+love
Mike
Wow, the first time I got moderated somewhere... What an experience!

This experience was really new to me. My post got immediately removed from the forum by the main author of Rocrail (with the forum moderator's hat on). The new experience was: I got really angry when I discovered that I had been moderated. Wow! Really a powerful emotion. No harassment in my words, no secrets disclosed, and still... my free speech got suppressed by someone. That feels intense! And it only occurred in the virtual realm, not face to face. Wow!!! I did not expect such intensity...

The reason for wiping my post without any other communication was given as below, and it is quite a statement to frown upon (this post, too, was "moderately" removed from the forum thread [2] a bit later today):

Mike,

I think its not a good idea to point out a way to get the sources back to the GPL periode.
Therefore I deleted your posting.

(The phpBB forum software also allows moderators to edit posts, so the critical passage could have been removed instead; immediately wiping the full message, well...). Also, just wiping my post to suppress my words, without replying or offering some apology, really is a no-go. And as for the reason for wiping the rest of the text... any Git user can easily figure out how to get a FLOSS version of Rocrail and continue to work on it from there. Really.

Now the political part of this blog post...

Fortunately, I still live in a part of the world where the right to free speech is still present. I found out: I really don't like being moderated!!! Especially if what I share / propose is really noooo secret at all. Anyone who knows how to use Git can come to the same conclusion as I did this morning.

[Off-topic, not at all related to Rocrail: The last votes here in Germany indicate that some really stupid folks here yearn for another–this time highly idiotic–wind of change, where free speech may end up as a precious good.]

To other (Debian) Package Maintainers and Railroad Enthusiasts...

With this blog post I probably close the last option for Rocrail going FLOSS again. Personally, I think that gate was already closed before I got in touch.

Now really moving on...

Probably the best approach for my new train conductor hobby (as already recommended by the woman at my side some weeks back) is to leave the laptop lid closed when switching on the train control units. I should have listened to her much earlier.

I have finally removed the Rocrail source code from my computer, without having built or tested the application. I have not shared the source code with anyone, nor the Git URL. I really think that FLOSS enthusiasts should stay away from this software for now. For my part, I have lost my interest in it completely...

References

light+love,
Mike

Gregor Herrmann: RC bugs 2016/37

19 September, 2016 - 04:22

we're not running out of (perl-related) RC bugs. here's my list for this week:

  • #811672 – qt4-perl: "FTBFS with GCC 6: cannot convert x to y"
    add patch from upstream bug tracker, upload to DELAYED/5
  • #815433 – libdata-messagepack-stream-perl: "libdata-messagepack-stream-perl: FTBFS with new msgpack-c library"
    upload new upstream version (pkg-perl)
  • #834249 – src:openbabel: "openbabel: FTBFS in testing"
    propose a patch (build with -std=gnu++98), later upload to DELAYED/2
  • #834960 – src:libdaemon-generic-perl: "libdaemon-generic-perl: FTBFS too much often (failing tests)"
    add patch from ntyni (pkg-perl)
  • #835075 – src:libmail-gnupg-perl: "libmail-gnupg-perl: FTBFS: Failed 1/10 test programs. 0/4 subtests failed."
    upload with patch from dkg (pkg-perl)
  • #835412 – src:libzmq-ffi-perl: "libzmq-ffi-perl: FTBFS too much often, makes sbuild to hang"
    add patch from upstream git (pkg-perl)
  • #835731 – src:libdbix-class-perl: "libdbix-class-perl: FTBFS: Tests failures"
    cherry-pick patch from upstream git (pkg-perl)
  • #837055 – src:fftw: "fftw: FTBFS due to bfnnconv.pl failing to execute m-ascii.pl (. removed from @INC in perl)"
    add patch to call require with "./", upload to DELAYED/2, rescheduled to 0-day on maintainer's request
  • #837221 – src:metacity-themes: "metacity-themes: FTBFS: Can't locate debian/themedata.pm in @INC"
    call helper scripts with "perl -I." in debian/rules, QA upload
  • #837242 – src:jwchat: "jwchat: FTBFS: Can't locate scripts/JWCI18N.pm in @INC"
    add patch to call require with "./", upload to DELAYED/2
  • #837264 – src:libsys-info-base-perl: "libsys-info-base-perl: FTBFS: Couldn't do SPEC: No such file or directory at builder/lib/Build.pm line 42."
    upload with patch from ntyni (pkg-perl)
  • #837284 – src:libimage-info-perl: "libimage-info-perl: FTBFS: Can't locate inc/Module/Install.pm in @INC"
    call perl with -I. in debian/rules, upload to DELAYED/2

Paul Tagliamonte: DNSync

19 September, 2016 - 04:00

While setting up my new network at my house, I figured I'd do things right and set up an IPSec VPN (and a few other fancy bits). One thing that became annoying when I wasn't on my LAN was that I'd have to fiddle with the DNS resolver to resolve names of machines on the LAN.

Since I hate fiddling with options when I need things to just work, the easiest way out was to make the DNS names actually resolve on the public internet.

A day or two later, with some Golang glue and AWS Route 53, I had code that would sit on my dnsmasq.leases file, watch for inotify IN_MODIFY signals, and sync the records to Route 53.

I pushed it up to my GitHub as DNSync.

PRs welcome!
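The core of a tool like this is simple: parse dnsmasq's lease file (one lease per line: expiry timestamp, MAC, IP, hostname, client-id) and turn each named lease into an UPSERT record change for the hosted zone. DNSync itself is Go; the rough Python sketch below mirrors the shape of a Route 53 ChangeResourceRecordSets change batch, but the helper names and the example domain are mine:

```python
def parse_leases(text):
    """Parse dnsmasq.leases lines ('expiry MAC IP hostname client-id')
    into a {hostname: ip} mapping, skipping anonymous leases ('*')."""
    records = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[3] != "*":
            records[fields[3]] = fields[2]
    return records


def change_batch(records, domain):
    """Build a Route 53-style UPSERT change batch from the lease records."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "{}.{}.".format(host, domain),
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }
            for host, ip in sorted(records.items())
        ]
    }


leases = (
    "1474300000 aa:bb:cc:dd:ee:ff 192.168.1.23 nas 01:aa:bb:cc:dd:ee:ff\n"
    "1474300200 aa:bb:cc:dd:ee:00 192.168.1.42 * *\n"
)
print(change_batch(parse_leases(leases), "lan.example.com"))
```

The real tool then watches the lease file for inotify IN_MODIFY events and submits a batch like this to Route 53 on every change, so the zone always reflects the current leases.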

Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.