Planet Debian

Planet Debian - http://planet.debian.org/

C.J. Adams-Collier: virt-manager cannot find suitable emulator for x86_64

22 September, 2016 - 01:42

Looks like I was missing qemu-kvm.

$ sudo apt-get install qemu-kvm qemu-system
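
To confirm KVM is now usable, a couple of generic sanity checks (not from the original post):

# the kvm device node should exist and be accessible
ls -l /dev/kvm
# and an x86-64 system emulator should now be on the PATH
qemu-system-x86_64 --version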

Matthew Garrett: Microsoft aren't forcing Lenovo to block free operating systems

22 September, 2016 - 00:09
There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.

The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.
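One way to see which mode the firmware has selected is to inspect the storage controller's PCI class from a live Linux environment. A rough check (not from the original post; the exact device strings vary by platform):

# a controller reporting class "RAID bus controller" rather than
# "SATA controller" or "Non-Volatile memory controller" suggests the
# firmware's "RAID" mode is active
lspci -nn | grep -Ei 'sata|raid|non-volatile'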

In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

(Why not offer the option to disable it? A user who does so would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is minuscule)

Things are somewhat obfuscated due to a statement from a Lenovo rep: "This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft." It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.


Sean Whitton: Boards in the philosophy dept. yesterday

21 September, 2016 - 07:02

[two photos: boards.jpg and boards2.jpg]

Compare

Vincent Sanders: If I see an ending, I can work backward.

21 September, 2016 - 04:12
Now, while I am sure Arthur Miller was referring to writing a play when he said those words, they have an oddly appropriate resonance for my topic.

In the early nineties Lou Montulli applied the idea of magic cookies to HTTP to make the web stateful; I imagine he had no idea of the issues he was going to introduce for the future. Like most web technology, it was a solution to an immediate problem which it has never been possible to subsequently improve.

The HTTP cookie is simply a way for a website to identify a connecting browser session so that state can be kept between retrieving pages. Due to shortcomings in the design of cookies and implementation details in browsers, this has led to a selection of unwanted side effects. The specific issue I am talking about here is the supercookie, where the "super" prefix carries much the same connotation as it does when applied to the word "villain".

Whenever the browser requests a resource (web page, image, etc.) the server may return a cookie along with the resource, which your browser remembers. The cookie has a domain name associated with it, and when your browser requests additional resources, if the cookie domain matches the requested resource's domain name the cookie is sent along with the request.

As an example, the first time you visit a page on www.example.foo.invalid you might receive a cookie with the domain example.foo.invalid, so the next time you visit a page on www.example.foo.invalid your browser will send the cookie along. Indeed it will also send it along for any page on another.example.foo.invalid.
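
On the wire this looks something like the following illustrative exchange (made-up header values, not captured traffic):

HTTP/1.1 200 OK
Set-Cookie: session=abc123; Domain=example.foo.invalid

GET /page HTTP/1.1
Host: another.example.foo.invalid
Cookie: session=abc123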

A supercookie is simply one where, instead of being limited to one sub-domain (example.foo.invalid), the cookie is set for a top level domain (foo.invalid), so when visiting any such domain (I used the invalid name in my examples but one could substitute com or co.uk) your web browser gives out the cookie. Hackers would love to be able to set up such cookies, as they could potentially control and hijack many sites at a time.

This problem was noted early on and browsers were not allowed to set cookie domains with fewer than two parts, so example.invalid or example.com were allowed but invalid or com on their own were not. This works fine for top level domains like .com, .org and .mil but not for countries where the domain registrar has rules about second levels, like the uk domain (uk domains must have a second level, like .co.uk).

There is no way to generate the correct set of top level domains with an algorithm, so a database is required; it is called the Public Suffix List (PSL). This database is a simple text formatted list with wildcard and inversion syntax, and is at time of writing around 180Kb of text including comments, which compresses down to 60Kb or so with deflate.
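
Those numbers are easy to check for yourself; the sizes will have drifted since this was written (gzip approximates plain deflate plus a small header):

# raw size of the current PSL, and its size after compression
curl -s https://publicsuffix.org/list/public_suffix_list.dat | wc -c
curl -s https://publicsuffix.org/list/public_suffix_list.dat | gzip -9 | wc -c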

A few years ago, with ICANN allowing the great expansion of top level domains, the existing NetSurf supercookie handling was found to be wanting and I decided to implement a solution using the PSL. At that point in time the database was only 100Kb source or 40Kb compressed.

I started by looking at the limited set of existing libraries. In fact only the regdom library was adequate, but it used 150Kb of heap to load the pre-processed list. This would have had the drawback of increasing NetSurf heap usage significantly (we still have users on 8Mb systems). Because of this, and the need to run a PHP script to generate the pre-processed input, it was decided the library was not suitable.

Lacking other choices, I came up with my own implementation, which used a Perl script to construct a tree of domains from the PSL in a static array, with the label strings in a separate table. At the time my implementation added 70Kb of read only data, which I thought reasonable and which allowed for direct lookup of answers from the database.

This solution still required a pre-processing step to generate the C source code, but Perl is much more readily available, is a language already used by our tooling, and we could always simply ship the generated file. As long as the generated file was updated at release time, as we already do for our fallback SSL certificate root set, this would be acceptable.

I put the solution into NetSurf, was pleased no-one seemed to notice, and moved on to other issues. Recently, while fixing a completely unrelated issue in the display of session cookies in the management interface, I realised I had some test supercookies present in the display. After the initial "that's odd" I realised with horror that there might be a deeper issue.

It quickly became evident that the PSL generation was broken and had been for a long time. Even worse, somewhere along the line the "redundant" empty generated source file had been removed and the ancient fallback code path was all that had been used.

This issue had escalated somewhat from a trivial display problem. I took a moment to assess the situation a bit more broadly and came to the conclusion that there were a number of interconnected causes, centered around the lack of automated testing, which could be solved by extracting the PSL handling into a "support" library.

NetSurf has several of these support libraries which can be used separately from the main browser project but are principally oriented towards it. These libraries are shipped and built in releases alongside the main browser codebase and mainly serve to make the API more obvious and modular. In this case my main aim was to have the functionality segregated into a separate module which could be tested, updated and monitored directly by our CI system, meaning the embarrassing failure I had found can never occur again.

Before creating my own library I did consider libpsl, a library which had been created since I wrote my original implementation. Initially I was very interested in using this library, given it managed a data representation within a mere 32Kb.

Unfortunately the library integrates a great deal of IDN and punycode handling which was not required in this use case. NetSurf already has to handle IDN and punycode translations, and uses punycode encoded domain names internally, only translating to unicode representations for display, so duplicating this functionality using other libraries requires a great deal of resource above the raw data representation.

I put the library together based on the existing code generator Perl program and integrated the test set that comes along with the PSL. I was a little alarmed to discover that the PSL had almost doubled in size since the implementation was originally written, and the trivial test program of the library was now weighing in at a hefty 120Kb.

This stemmed from two main causes:
  1. there were now many more domain label strings to be stored
  2. there were now many, many more nodes in the tree.
To address the first cause, the length of each domain label string was moved into the unused padding space within its tree node, removing a byte from each domain label and saving 6Kb. Next it occurred to me, while building the domain label string table, that if the label to be added already existed as a substring within the table it could be elided.

The domain labels were sorted from longest to shortest and added in order, searching for substring matches as the table was built; this saved another 6Kb. I am sure there are ways to reduce this further I have missed (if you see them let me know!) but a 25% saving (47Kb to 35Kb) was a good start.
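
As a toy illustration of the elision idea (a sketch in shell, not the real generator code):

# add each label, longest first; skip it if it already appears as a
# substring of the table built so far
table=""
for label in example ample foo; do
    case "$table" in
        *"$label"*) ;;                  # already covered, elide
        *) table="${table}${label}" ;;  # otherwise append
    esac
done
echo "$table"    # prints "examplefoo"; "ample" was elided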

The second cause was a little harder to address. The structure representing nodes in the tree I started with was at first look reasonable.

struct pnode {
    uint16_t label_index;      /* index into string table of label */
    uint16_t label_length;     /* length of label */
    uint16_t child_node_index; /* index of first child node */
    uint16_t child_node_count; /* number of child nodes */
};

I examined the generated table and observed that the majority of nodes were leaf nodes (they had no children), which makes sense given the type of data being represented. By allowing two types of node, one for labels and a second for the child node information, the node size would be halved in most cases, while requiring only a modest change to the tree traversal code.

The only issue with this was the need for a way to indicate that a node has child information. It was realised that domain labels have a maximum length of 63 characters, meaning their length can be represented in six bits, so a uint16_t was excessive. The space was split into two uint8_t parts, one for the length and one for a flag to indicate that a child data node follows.

union pnode {
    struct {
        uint16_t index;       /* index into string table of label */
        uint8_t length;       /* length of label */
        uint8_t has_children; /* the next table entry is a child node */
    } label;
    struct {
        uint16_t node_index;  /* index of first child node */
        uint16_t node_count;  /* number of child nodes */
    } child;
};

static const union pnode pnodes[8580] = {
    /* root entry */
    { .label = { 0, 0, 1 } }, { .child = { 2, 1553 } },
    /* entries 2 to 1794 */
    { .label = { 37, 2, 1 } }, { .child = { 1795, 6 } },

    ...

    /* entries 8577 to 8578 */
    { .label = { 31820, 6, 1 } }, { .child = { 8579, 1 } },
    /* entry 8579 */
    { .label = { 0, 1, 0 } },
};

This change reduced the node array size from 63Kb to 33Kb, almost a 50% saving. I considered using bitfields to try and reduce the label length and has_children flag into a single byte, but such packing will not reduce the length of a node below 32 bits because it is unioned with the child structure.

The possibility of using the spare uint8_t derived by bitfield packing to store an additional label node across three other nodes was considered, but it added a great deal of complexity to node lookup and table construction for a saving of around 4Kb, so was not incorporated.

With the changes incorporated, the test program was a much more acceptable 75Kb, reasonably close to the size of the compressed source but with the benefit of direct lookup. Integrating the library's single API call into NetSurf was straightforward and resulted in correct operation when tested.
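
To make the two-entry node scheme concrete, here is a minimal sketch of how a child lookup over such a table might proceed. The helper name and the labels string table are my assumptions for illustration, not the actual NetSurf API:

#include <stdint.h>
#include <string.h>

extern const union pnode pnodes[]; /* stand-in for the generated node table */
extern const char labels[];        /* stand-in for the label string table */

/* find the child of 'parent' whose label matches label/len, or NULL;
 * e.g. lookup_child(&pnodes[0], "uk", 2) starting from the root */
static const union pnode *
lookup_child(const union pnode *parent, const char *label, size_t len)
{
    if (!parent->label.has_children)
        return NULL; /* leaf node: no child info entry follows */

    /* the entry after a label node with has_children set is child info */
    const union pnode *node = &pnodes[parent[1].child.node_index];
    uint16_t count = parent[1].child.node_count;

    while (count-- > 0) {
        if (node->label.length == len &&
            memcmp(labels + node->label.index, label, len) == 0)
            return node;
        /* skip this child's optional child-info entry */
        node += node->label.has_children ? 2 : 1;
    }
    return NULL;
}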

This episode just reminded me of the dangers of code that can fail silently. It exposed our users to a security problem that we thought had been addressed almost six years ago, and it squandered the limited resources of the project. Hopefully it is a lesson we will not have to learn again any time soon. If there is a positive to take away, it is that the new implementation is more space efficient, automatically built and, importantly, tested.

Gunnar Wolf: Proposing a GR to repeal the 2005 vote for declassification of the debian-private mailing list

20 September, 2016 - 23:03

For the non-Debian people among my readers: The following post presents bits of the decision-taking process in the Debian project. You might find it interesting, or terribly dull and boring :-) Proceed at your own risk.

My reason for posting this entry is to get more people to read the accompanying options for my proposed General Resolution (GR), and have as full a ballot as possible.

Almost three weeks ago, I sent a mail to the debian-vote mailing list. I'm quoting it here in full:

Some weeks ago, Nicolas Dandrimont proposed a GR for declassifying
debian-private[1]. In the course of the following discussion, he
accepted[2] Don Armstrong's amendment[3], which intended to clarify the
meaning and implementation regarding the work of our delegates and the
powers of the DPL, and recognizing the historical value that could lie
within said list.

[1] https://www.debian.org/vote/2016/vote_002
[2] https://lists.debian.org/debian-vote/2016/07/msg00108.html
[3] https://lists.debian.org/debian-vote/2016/07/msg00078.html

In the process of the discussion, several people objected to the
amended wording, particularly to the fact that "sufficient time and
opportunity" might not be sufficiently bound and defined.

I am, as some of its initial seconders, a strong believer in Nicolas'
original proposal; repealing a GR that was never implemented in the
slightest way basically means the Debian project should stop lying,
both to itself and to the whole free software community within which
it exists, about something that would be nice but is effectively not
implementable.

While Don's proposal is a good contribution, given that in the
aforementioned GR "Further Discussion" won 134 votes against 118, I
hereby propose the following General Resolution:

=== BEGIN GR TEXT ===

Title: Acknowledge that the debian-private list will remain private.

1. The 2005 General Resolution titled "Declassification of debian-private
   list archives" is repealed.
2. In keeping with paragraph 3 of the Debian Social Contract, Debian
   Developers are strongly encouraged to use the debian-private mailing
   list only for discussions that should not be disclosed.

=== END GR TEXT ===

Thanks for your consideration,
--
Gunnar Wolf
(with thanks to Nicolas for writing the entirety of the GR text ;-) )

Yesterday, I spoke with the Debian project secretary, who confirmed my proposal has reached enough Seconds (that is, we have reached five people wanting the vote to happen), so I could now formally do a call for votes. Thing is, there are two other proposals I feel are interesting, and should be part of the same ballot, and both address part of the reasons why the GR initially proposed by Nicolas didn't succeed:

So, once more (and finally!), why am I posting this?

  • To invite Iain to formally propose his text as an option to mine
  • To invite more DDs to second the available options
  • To publicize the ongoing discussion

I plan to do the formal call for votes by Friday 23.

Michal Čihař: wlc 0.6

20 September, 2016 - 23:00

wlc 0.6, a command line utility for Weblate, has just been released. There have been some minor fixes, but the most important news is that Windows and OS X are now supported platforms as well.

Full list of changes:

  • Fixed error when invoked without command.
  • Tested on Windows and OS X (in addition to Linux).

wlc is built on the API introduced in Weblate 2.6, which is still in development. Several commands from wlc will not work properly if executed against Weblate 2.6; the first fully supported version is 2.7 (it is now running on both the demo and hosting servers). You can find usage examples in the wlc documentation.


Reproducible builds folks: Reproducible Builds: week 73 in Stretch cycle

20 September, 2016 - 19:58

What happened in the Reproducible Builds effort between Sunday September 11 and Saturday September 17 2016:

Toolchain developments

Ximin Luo started a new series of tools called (for now) debrepatch, to make it easier to automate checks that our old patches to Debian packages still apply to newer versions of those packages, and still make these reproducible.

Ximin Luo updated one of our few remaining patches for dpkg in #787980 to make it cleaner and more minimal.

The following tools were fixed to produce reproducible output:

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible - in our current test setup - after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

The following 3 packages were not changed, but have become reproducible due to changes in their build-dependencies: jaxrs-api python-lua zope-mysqlda.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

462 package reviews have been added, 524 have been updated and 166 have been removed this week, adding to our knowledge about identified issues.

25 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (10)
  • Filip Pytloun (1)
  • Santiago Vila (1)
diffoscope development

A new version, diffoscope 60, was uploaded to unstable by Mattia Rizzolo. It included contributions from:

  • Mattia Rizzolo:
    • Various packaging and testing improvements.
  • HW42:
    • minor wording fixes
  • Reiner Herrmann:
    • minor wording fixes
strip-nondeterminism development

New versions of strip-nondeterminism, 0.027-1 and 0.028-1, were uploaded to unstable by Chris Lamb. They included contributions from:

  • Chris Lamb:
    • Testing improvements, including better handling of timezones.
disorderfs development

A new version of disorderfs 0.5.1 was uploaded to unstable by Chris Lamb. It included contributions from:

  • Andrew Ayer and Chris Lamb:
    • Support relative paths for ROOTDIR; it no longer needs to be an absolute path.
  • Chris Lamb:
    • Print the behaviour (shuffle/reverse/sort) on startup to stdout.
Misc.

This week's edition was written by Ximin Luo and reviewed by a bunch of Reproducible Builds folks on IRC.

Mike Gabriel: Rocrail changed License to some dodgy non-free non-License

19 September, 2016 - 16:51
The Background Story

A year ago, or so, I took some time to search the internet for Free Software that can be used for controlling model railways via a computer. I was happy to find Rocrail [1] being one of only a few applications available on the market. And even more, I was very happy when I saw that it had been licensed under a Free Software license: GPL-3(+).

A month ago, or so, I collected my old Märklin (Digital) stuff from my parents' place and started looking into it again after 15+ years, together with my little son.

Some weeks ago, I remembered Rocrail and thought... Hey, this software was GPLed code and absolutely suitable for uploading to Debian and/or Ubuntu. I searched for the Rocrail source code and figured out that it got hidden from the web some time in 2015 and that the license obviously has been changed to some non-free license (I could not figure out what license, though).

This made me very sad! I thought I had found a piece of software that might be interesting to test with my model railway. Whenever I stumble over some nice piece of Free Software that I plan to use (or even only play with), I upload it to Debian as one of the first steps. However, I try hard to stay away from non-free software, so Rocrail became a no-option for me back in 2015.

I should have moved on from here on...

Instead...

Proactively, I signed up with the Rocrail forum and asked the author(s) if they saw any chance of re-licensing the Rocrail code under the GPL (or any other FLOSS license) again [2]. When I encounter situations like this, I normally offer my expertise and help with such licensing stuff for free. My impression by this point already was that something strange must have happened in the past, such that the software developers chose the GPL, later stepped back from that decision, and from then on have been hiding the source code from the web entirely.

Going deeper...

The Rocrail project's wiki states that anyone can request GitBlit access via the forum and obtain the source code via Git for local build purposes only. Nice! So, I asked for access to the project's Git repository, which I was granted. Thanks for that.

Trivial Source Code Investigation...

So far so good. I investigated the source code (well, only the license meta stuff shipped with the source code...) and found that the main COPYING files (found at various locations in the source tree, containing a full version of the GPL-3 license) had been replaced by this text:

Copyright (c) 2002 Robert Jan Versluis, Rocrail.net
All rights reserved.
Commercial usage needs permission.

The replacement happened with these Git commits:

commit cfee35f3ae5973e97a3d4b178f20eb69a916203e
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Fri Jul 17 16:09:45 2015 +0200

    update copyrights

commit df399d9d4be05799d4ae27984746c8b600adb20b
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 14:49:12 2015 +0200

    update licence

commit 0daffa4b8d3dc13df95ef47e0bdd52e1c2c58443
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 10:17:13 2015 +0200

    update
Getting in touch again, still being really interested and wanting to help...

As I consider such a non-license as really dangerous when distributing any sort of software, be it Free or non-free Software, I posted the below text on the Rocrail forum:

Hi Rob,

I just stumbled over this post [3] [link reference adapted for this blog post], which probably is the one you have referred to above.

It seems that Rocrail contains features that require a key or such for permanent activation. Basically, this is allowed and possible even with the GPL-3+ (although Free Software activists will not appreciate that). As the GPL states that people can share the source code, programmers can easily deactivate license key checks (and such) in the code and re-distribute that patchset as they like.

Furthermore, the current COPYING file is really non-protective at all. It does not really protect you as copyright holder of the code. Meaning, if people crash their trains with your software, you could actually be legally prosecuted for that. In theory. Or in the U.S. ( ;-) ). The main reason for having a long long license text is to protect you as the author in case your software causes trouble to other people. You do not have any warranty disclaimer in your COPYING file or elsewhere. Really not a good idea.

In that referenced post above, someone also writes about the nuisance of license discussions in this forum. I have seen various cases where people produced software and did not really care for licensing. Some ended with a letter from a lawyer, some with some BIG company using their code under their copyright holdership and their own commercial licensing scheme. This is not paranoia, this is what happens in the Free Software world from time to time.

A model that might be much more appropriate (and more protective to you as the author), maybe, is a dual release scheme for the code. A possible approach could be to split Rocrail into two editions: Community Edition and Professional/Commercial Edition. The Community Edition must be licensed in a way that allows re-using the code in a closed-source, non-free version of Rocrail (e.g. MIT/Expat License or Apache-2.0 License). Thus, the code base belonging to the Community Edition would be licensed, say..., as Apache-2.0, and for the extra features in the Commercial Edition you may use any non-free license you want (but please not that COPYING file you have now, it really does not protect your copyright holdership).

The reason for releasing (a reduced set of features of a) software as Free Software is to extend the user base. The honey jar effect, as practised by many huge FLOSS projects (e.g. Owncloud, GitLab, etc.). If people could install Rocrail from the Debian / Ubuntu archives directly, I am sure that the user base of Rocrail would increase. There may also be developers popping up showing an interest in Rocrail (e.g. like me). However, I know many FLOSS developers (e.g. like me) that won't waste their free time on working for a non-free piece of software (without being paid).

If you follow (or want to follow) a business model with Rocrail, then keep some interesting features in the Commercial Edition and don't ship that source code. People with deep interest may opt for that.

Furthermore, another option could be dual licensing the code. As the copyright holder of Rocrail you are free to juggle with licenses and apply any license to a release you want. For example, this can be interesting for a free-again Rocrail being shipped via Apple's iStore.

Last but not least, as you ship the complete source code with all previous changes as a Git project to those who request GitBlit access, it is possible to obtain all earlier versions of Rocrail. In the mail I received with my GitBlit credentials, there was some text that prohibits publishing the code. Fine. But: (in theory) it is not forbidden to share the code with a friend, for local usage. This friend finds the COPYING file, frowns and rewinds back to 2015 where the license was still GPL-3+. GPL-3+ code can be shared with anyone and also published, so this friend could upload the 2015-version of Rocrail to Github or such and start to work on a free fork. You also may not want this.

Thanks for working on this piece of software! It is highly interesting, and I am still sad that it does not come with a free license anymore. I won't continue this discussion and move on, unless you are interested in any of the above information and ask for more expertise. Ping me here or directly via mail, if needed. If the expertise leads to parts of Rocrail becoming Free Software again, the expertise is offered free of charge ;-).

light+love
Mike
Wow, the first time I got moderated somewhere... What an experience!

This experience was really new to me. My post got immediately removed from the forum by the main author of Rocrail (with the forum moderator's hat on). The new experience was: I got really angry when I discovered that I had been moderated. Wow! Really a powerful emotion. No harassment in my words, no secrets disclosed, and still... my free speech got suppressed by someone. That feels intense! And it only occurred in the virtual realm, not face to face. Wow!!! I did not expect such intensity...

The reason for wiping my post without any other communication was given as below, and it is quite a statement to frown upon (this post has also been "moderately" removed from the forum thread [2] a bit later today):

Mike,

I think its not a good idea to point out a way to get the sources back to the GPL periode.
Therefore I deleted your posting.

(The phpBB forum software also allows moderators to edit posts, so the critical passage could have been removed instead, but immediately wiping the full message, well...). Also, just wiping my post to suppress my words, without any reply or apology, really is a no-go. And as for the reason given for wiping the rest of the text... Any Git user can easily figure out how to get a FLOSS version of Rocrail and continue to work on that from then on. Really.

Now the political part of this blog post...

Fortunately, I still live in a part of the world where the right of free speech is still present. I found out: I really don't like being moderated!!! Especially if what I share / propose is really noooo secret at all. Anyone who knows how to use Git can come to the same conclusion as I did this morning.

[Off-topic, not at all related to Rocrail: The recent elections here in Germany indicate that some really stupid folks here yearn for another (this time highly idiotic) wind of change, in which free speech may end up a precious good.]

To other (Debian) Package Maintainers and Railroad Enthusiasts...

With this blog post I probably close the last option for Rocrail going FLOSS again. Personally, I think that gate was already closed before I got in touch.

Now really moving on...

Probably the best approach for my new train conductor hobby (as already recommended by the woman at my side some weeks back) is to leave the laptop lid closed when switching on the train control units. I should have listened to her much earlier.

I have finally removed the Rocrail source code from my computer again, without building and testing the application. Nor have I shared the source code or the Git URL with anyone. I really think that FLOSS enthusiasts should stay away from this software for now. For my part, I have lost my interest in it completely...

References

light+love,
Mike

Gregor Herrmann: RC bugs 2016/37

19 September, 2016 - 04:22

we're not running out of (perl-related) RC bugs. here's my list for this week:

  • #811672 – qt4-perl: "FTBFS with GCC 6: cannot convert x to y"
    add patch from upstream bug tracker, upload to DELAYED/5
  • #815433 – libdata-messagepack-stream-perl: "libdata-messagepack-stream-perl: FTBFS with new msgpack-c library"
    upload new upstream version (pkg-perl)
  • #834249 – src:openbabel: "openbabel: FTBFS in testing"
    propose a patch (build with -std=gnu++98), later upload to DELAYED/2
  • #834960 – src:libdaemon-generic-perl: "libdaemon-generic-perl: FTBFS too much often (failing tests)"
    add patch from ntyni (pkg-perl)
  • #835075 – src:libmail-gnupg-perl: "libmail-gnupg-perl: FTBFS: Failed 1/10 test programs. 0/4 subtests failed."
    upload with patch from dkg (pkg-perl)
  • #835412 – src:libzmq-ffi-perl: "libzmq-ffi-perl: FTBFS too much often, makes sbuild to hang"
    add patch from upstream git (pkg-perl)
  • #835731 – src:libdbix-class-perl: "libdbix-class-perl: FTBFS: Tests failures"
    cherry-pick patch from upstream git (pkg-perl)
  • #837055 – src:fftw: "fftw: FTBFS due to bfnnconv.pl failing to execute m-ascii.pl (. removed from @INC in perl)"
    add patch to call require with "./", upload to DELAYED/2, rescheduled to 0-day on maintainer's request
  • #837221 – src:metacity-themes: "metacity-themes: FTBFS: Can't locate debian/themedata.pm in @INC"
    call helper scripts with "perl -I." in debian/rules, QA upload
  • #837242 – src:jwchat: "jwchat: FTBFS: Can't locate scripts/JWCI18N.pm in @INC"
    add patch to call require with "./", upload to DELAYED/2
  • #837264 – src:libsys-info-base-perl: "libsys-info-base-perl: FTBFS: Couldn't do SPEC: No such file or directory at builder/lib/Build.pm line 42."
    upload with patch from ntyni (pkg-perl)
  • #837284 – src:libimage-info-perl: "libimage-info-perl: FTBFS: Can't locate inc/Module/Install.pm in @INC"
    call perl with -I. in debian/rules, upload to DELAYED/2

Paul Tagliamonte: DNSync

19 September, 2016 - 04:00

While setting up my new network at my house, I figured I’d do things right and set up an IPSec VPN (and a few other fancy bits). One thing that became annoying when I wasn’t on my LAN was that I’d have to fiddle with the DNS resolver to resolve names of machines on the LAN.

Since I hate fiddling with options when I need things to just work, the easiest way out was to make the DNS names actually resolve on the public internet.

A day or two later, with some Golang glue and AWS Route 53, I had code that would sit on my dnsmasq.leases file, watch inotify for IN_MODIFY events, and sync the records to AWS Route 53.
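
The actual implementation is Go, but the shape of the loop is easy to show with shell equivalents (a conceptual sketch; the lease path, zone ID and the change-batch helper are placeholders, not the real code):

# watch the dnsmasq lease file and push changes to Route 53
while inotifywait -e modify /var/lib/misc/dnsmasq.leases; do
    # hypothetical helper that turns leases into a Route 53 change batch
    ./leases-to-changebatch > /tmp/changes.json
    aws route53 change-resource-record-sets \
        --hosted-zone-id ZXXXXXXXXXXXXX --change-batch file:///tmp/changes.json
done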

I pushed it up to my GitHub as DNSync.

PRs welcome!

Eriberto Mota: Statistics to Choose a Debian Package to Help

18 September, 2016 - 10:30

In the last week I played a bit with UDD (the Ultimate Debian Database). After some experiments I wrote a script to generate a daily report about source packages in Debian. This report is useful for choosing a package that needs help.

The daily report has six sections:

  • Sources in Debian Sid (including orphan)
  • Sources in Debian Sid (only Orphan, RFA or RFH)
  • Top 200 sources in Debian Sid with outdated Standards-Version
  • Top 200 sources in Debian Sid with NMUs
  • Top 200 sources in Debian Sid with BUGs
  • Top 200 sources in Debian Sid with RC BUGs

The first section has several important data about all source packages in Debian, ordered by last upload to Sid. It is very useful for spotting packages without revisions for a long time. Other interesting data about each package are the Standards-Version, packaging format and number of NMUs, among others. Believe it or not, there are packages whose last upload to Sid was in 2003! (seven packages)

With the report, you can choose an ideal package for QA uploads, NMUs or adoption.
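
If you would like to poke at UDD directly, there is a public read-only mirror. A hedged example (connection details as documented on the Debian wiki; table and column names may change over time):

# both the username and the password are "udd-mirror"
psql postgresql://udd-mirror:udd-mirror@udd-mirror.debian.net/udd \
  -c "SELECT source, version FROM sources WHERE release='sid' LIMIT 5;"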

Well, if you like to review packages, this report is for you: https://people.debian.org/~eriberto/eriberto_stats.html. Enjoy!

 

Norbert Preining: Fixing packages for broken Gtk3

18 September, 2016 - 09:31

As mentioned on sunweaver’s blog (Debian’s GTK-3+ v3.21 breaks Debian MATE 1.14), Gtk3 is breaking apps all around. But not only MATE: probably many other apps are broken too, in particular Nemo (the file manager of the Cinnamon desktop), which has redraw issues (bug 836908) and regular crashes (bug 835043).

I have prepared packages for mate-terminal and nemo built from the most recent git sources. The new mate-terminal now does not crash anymore on profile changes (bug 835188), and the nemo redraw issues are gone. Unfortunately, the other crashes of nemo are still there. The apt-gettable repository with sources and amd64 binaries is here:

deb http://www.preining.info/debian/ gtk3fixes main
deb-src http://www.preining.info/debian/ gtk3fixes main

and are signed with my usual GPG key.

Last but not least, I quote from sunweaver’s blog:

Questions
  1. Isn’t GTK-3+ a shared library? This one was rhetorical… Yes, it is.
  2. One that breaks other application with every point release? Well, unfortunately, as experience over the past years has shown: Yes, this has happened several times, so far — and it happened again.
  3. Why is it that GTK-3+ uploads appear in Debian without going through a proper transition? This question is not rhetorical. If someone has an answer, please enlighten me.

(end of quote)

<rant>
My personal answer to this is: Gtk is strongly related to Gnome, Gnome is strongly related to SystemD, all this is pushed onto Debian users in the usual way of “we don’t care for breaking non-XXX apps” (for XXX in Gnome, SystemD). It is very sad to see this recklessness taking more and more space all over Debian.
</rant>

I finish with another quote from sunweaver’s blog:

already scared of the 3.22 GTK+ release, luckily the last development release of the GTK+ 3-series

Jonas Meurer: apache rewritemap querystring

17 September, 2016 - 22:52
Apache2: Rewrite REQUEST_URI based on a bulk list of GET parameters in QUERY_STRING

Recently I searched for a solution to rewrite a REQUEST_URI based on GET parameters in the QUERY_STRING. To make it even more complicated, I had a list of ~2000 parameters that had to be rewritten like the following:

if %{QUERY_STRING} starts with one of <parameters>:
    rewrite %{REQUEST_URI} from /new/ to /old/

Honestly, it took me several hours to find a solution that was satisfying and scales well. Hopefully this post will save time for others in need of a similar solution.

Research and first attempt: RewriteCond %{QUERY_STRING} ...

After reading through some documentation, particularly Manipulating the Query String, the following ideas came to my mind at first:

RewriteCond %{REQUEST_URI} ^/new/
RewriteCond %{QUERY_STRING} ^(param1)(.*)$ [OR]
RewriteCond %{QUERY_STRING} ^(param2)(.*)$ [OR]
...
RewriteCond %{QUERY_STRING} ^(paramN)(.*)$
RewriteRule /new/ /old/?%1%2 [R,L]

or instead of an own RewriteCond for each parameter:

RewriteCond %{QUERY_STRING} ^(param1|param2|...|paramN)(.*)$
There has to be something smarter ...

But with ~2000 parameters to look up, neither of the solutions seemed particularly smart. Both scale really badly, and it's probably rather heavy for Apache to check ~2000 conditions on every ^/new/ request.

Instead I was searching for a solution to lookup a string from a compiled list of strings. RewriteMap seemed like it might be what I was searching for. I read the Apache2 RewriteMap documentation here and here and finally found a solution that worked as expected, with one limitation. But read on ...

The solution: RewriteMap and RewriteCond ${mapfile:%{QUERY_STRING}} ...

Finally, the solution was to use a RewriteMap with all parameters that shall be rewritten and check given parameters in the requests against this map within a RewriteCond. If the parameter matches, the simple RewriteRule applies.

For the impatient, here's the rewrite magic from my VirtualHost configuration:

RewriteEngine On
RewriteMap RewriteParams "dbm:/tmp/rewrite-params.map"
RewriteCond %{REQUEST_URI} ^/new/
RewriteCond ${RewriteParams:%{QUERY_STRING}|NOT_FOUND} !=NOT_FOUND
RewriteRule ^/new/ /old/ [R,L]
A more detailed description of the solution

First, I created a RewriteMap at /tmp/rewrite-params.txt with all parameters to be rewritten. A RewriteMap requires two fields per line, one with the origin and the other with the replacement part. Since I use the RewriteMap merely for checking the condition, not for real string replacement, the second field doesn't matter to me. I ended up putting my parameters in both fields, but you could choose any value for the second field:

/tmp/rewrite-params.txt:

param1 param1
param2 param2
...
paramN paramN

Then I created a DBM hash map file from that plain text map file, as DBM maps are indexed, while TXT maps are not. In other words: with big maps, DBM is a huge performance boost:

httxt2dbm -i /tmp/rewrite-params.txt -o /tmp/rewrite-params.map

Now, let's go through the VirtualHost configuration rewrite magic from above line by line. First line should be clear: it enables the Apache Rewrite Engine:

RewriteEngine On

Second line defines the RewriteMap that I created above. It contains the list of parameters to be rewritten:

RewriteMap RewriteParams "dbm:/tmp/rewrite-params.map"

The third line limits the rewrites to REQUEST_URIs that start with /new/. This is particularly required to prevent rewrite loops. Without that condition, queries that have been rewritten to /old/ would go through the rewrite again, resulting in an endless rewrite loop:

RewriteCond %{REQUEST_URI} ^/new/

The fourth line is the core condition: it checks whether QUERY_STRING (the GET parameters) is listed in the RewriteMap. A fallback value 'NOT_FOUND' is defined if the lookup didn't match. The condition is only true, if the lookup was successful and the QUERY_STRING was found within the map:

RewriteCond ${RewriteParams:%{QUERY_STRING}|NOT_FOUND} !=NOT_FOUND

The last line is a simple RewriteRule from /new/ to /old/. It is executed only if all previous conditions are met. The flags are R for redirect (issuing a HTTP redirect to browser) and L for last (causing mod_rewrite to stop processing immediately after that rule):

RewriteRule ^/new/ /old/ [R,L]
Known issues

A big limitation of this solution (compared to the ones above) is that it looks up the whole QUERY_STRING in the RewriteMap. Therefore, it works only if the parameter is the only GET parameter. In case of additional GET parameters, the second rewrite condition fails and nothing is rewritten, even if the first GET parameter is listed in the RewriteMap.

If anyone comes up with a solution to this limitation, I would be glad to learn about it :)
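
One untested idea, offered as a sketch only: capture just the first parameter name from QUERY_STRING in an extra RewriteCond and look that up in the map instead of the full string. Here %1 refers to the capture from the preceding condition:

RewriteEngine On
RewriteMap RewriteParams "dbm:/tmp/rewrite-params.map"
RewriteCond %{REQUEST_URI} ^/new/
RewriteCond %{QUERY_STRING} ^([^&=]+)
RewriteCond ${RewriteParams:%1|NOT_FOUND} !=NOT_FOUND
RewriteRule ^/new/ /old/ [R,L]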

Links

Jonas Meurer: data recovery

17 September, 2016 - 22:45
Data recovery with ddrescue, testdisk and sleuthkit

From time to time I need to recover data from disks. Reasons can be broken flash/hard disks as well as accidentally deleted files. Fortunately, this doesn't happen too often, which on the downside means that I usually don't remember the details about best practice.

Now that a good friend asked me to recover very important data from a broken flash disk, I take the opportunity to write down what I did, so that hopefully I don't need to read the same docs again next time :)

Disclaimer: I didn't take the time to read through full documentation. This is rather a brief summary of the best practice to my knowledge, not a sophisticated and detailed explanation of data recovery techniques.

Create image with ddrescue

The first and most important rule for recovery tasks: don't work on the original; use a copied image instead. This way you can do whatever you want without risking further data loss.

The perfect tool for this is GNU ddrescue. Unlike dd, it doesn't reiterate over a broken sector with I/O errors again and again while copying. Instead, it remembers the broken sector for later and goes on to the next sector first. That way, all sectors that can be read without errors are copied first. This is particularly important, as every extra attempt to read a broken sector can further damage the source device, causing even more data loss.

In Debian, ddrescue is available in the gddrescue package:

apt-get install gddrescue

Copying the raw disk content to an image with ddrescue is as easy as:

ddrescue /dev/disk disk-backup.img disk.log

Giving a logfile as the third argument has the great advantage that you can interrupt ddrescue at any time and continue the copy process later, possibly with different options.

In case of very large disks where only the first part was in use, it might be useful to start with copying the beginning only:

ddrescue -i0 -s20MiB /dev/disk disk-backup.img disk.log

In case of errors after the first run, you should start ddrescue again with direct read access (-d) and tell it to try again bad sectors three times (-r3):

ddrescue -d -r3 /dev/disk disk-backup.img disk.log

If some sectors are still missing afterwards, it might help to run ddrescue with infinite retries for some time (e.g. one night):

ddrescue -d -r-1  /dev/disk disk-backup.img disk.log
Inspect the image

Now that you have an image of the raw disk, you can take a first look at what it contains. If ddrescue was able to recover all sectors, chances are high that no further magic is required and all data is there.

If the raw disk (used to) contain a partition table, take a first look with mmls from sleuthkit:

mmls disk-backup.img

In case of an intact partition table, you can try to create device maps with kpartx after setting up a loop device for the image file:

losetup /dev/loop0 disk-backup.img
kpartx -a /dev/loop0

If kpartx finds partitions, they will be made available at /dev/mapper/loop0p1, /dev/mapper/loop0p2 and so on.

Search for filesystems on the partitions with fsstat from sleuthkit on the partition device map:

fsstat /dev/mapper/loop0p1

Or run fsstat directly on the image file with the sector offset discovered by mmls earlier (this might also work in cases where kpartx could not set up device maps):

fsstat -o 8064 disk-backup.img

The offset obviously is not needed if the image contains a partition dump (without partition table):

fsstat disk-backup.img

In case a filesystem is found, simply try to mount it:

mount -t <fstype> -o ro /dev/mapper/loop0p1 /mnt

or

losetup -o $((8064*512)) /dev/loop1 disk-backup.img
mount -t <fstype> -o ro /dev/loop1 /mnt

(Note that losetup -o expects a byte offset, unlike the sleuthkit tools, whose -o takes sectors; hence the multiplication by the sector size of 512.)
Recover partition table

If the partition table is broken, try to recover it with testdisk. But first, create a second copy of the image, as you will alter it now:

ddrescue disk-backup.img disk-backup2.img
testdisk disk-backup2.img

In testdisk, select a media (e.g. Disk disk-backup2.img) and proceed, then select the partition table type (usually Intel or EFI GPT) and run Analyse -> Quick Search. If partitions are found, select one or more and write the partition structure to disk.

Recover files

Finally, let's try to recover the actual files from the image.

testdisk

If the partition table recovery was successful, try to undelete files from within testdisk. Go back to the main menu and select Advanced -> Undelete.

photorec

Another option is to use the photorec tool that comes with testdisk. It searches the image for known file structures directly, ignoring possible filesystems:

photorec sdb2.img

You have to select either a particular partition or the whole disk, a file system (ext2/ext3 vs. other) and a destination for recovered files.

Last time, photorec was my last resort, as the fat32 filesystem was so damaged that testdisk detected only an empty filesystem.

sleuthkit

sleuthkit also ships with tools to undelete files. I tried fls and icat. fls searches for and lists files and directories in the image, searching for parts of the former filesystem. icat copies files by their inode number. Last time I tried, fls and icat didn't recover any new files compared to photorec.

Still, for the sake of completeness, I document what I did. First, I invoked fls in order to search for files:

fls -f fat32 -o 8064 -pr disk-backup.img

Then, I tried to backup one particular file from the list:

icat -f fat32 -o 8064 disk-backup.img <INODE> > recovered_file

Finally, I used the recoup.pl script from Dave Henk in order to batch-recover all discovered files:

wget http://davehenk.googlepages.com/recoup.pl
chmod +x recoup.pl
vim recoup.pl
[...]
my $fullpath="~/recovery/sleuthkit/";
my $FLS="/usr/bin/fls";
my @FLS_OPT=("-f","fat32","-o","8064","-pr","-m $fullpath","-s 0");
my $FLS_IMG="~/recovery/disk-image.img";
my $ICAT_LOG="~/recovery/icat.log";
my $ICAT="/usr/bin/icat";
my @ICAT_OPT=("-f","fat32","-o","8064");
[...]

Further down, the double quotes around $fullfile needed to be replaced by single quotes (at least in my case, as $fullfile contained a subdir called '$OrphanFiles'):

system("$ICAT @ICAT_OPT $ICAT_IMG $inode > \'$fullfile\' 2>> $ICAT_LOG") if ($inode != 0);

That's it for now. Feel free to comment with suggestions on how to further improve the process of recovering data from broken disks.

Links

Jonas Meurer: hello world

17 September, 2016 - 22:41

Hello world!

Finally, this is my first post on my own blog. For years I refrained from having my own blog, even though I have been thinking about it for at least 10 years.

Today I decided to give it a try, let's see where it takes me. I'll write about Free Software topics from time to time, probably with a strong emphasis on Debian GNU/Linux.

Welcome to everybody who steps by.

Cheers,
mejo

Norbert Preining: Android 7.0 Nougat – Root – PokemonGo

17 September, 2016 - 11:59

Since my switch to Android, my Nexus 6p has been rooted, and I had happily fixed the Android (<7) font errors with Japanese fonts in an English environment (see this post). The recently released Android 7 Nougat finally fixes this problem, so it was high time to update.

In addition, a recent update to Pokemon Go excluded rooted devices, so I was searching for a solution that allows me to: update to Nougat, keep root, and run PokemonGo (as well as some bank security apps etc).

After some playing around here are the steps I took:

Installation of necessary components

Warning: The following is for the Nexus 6p device; you need different image files and a different TWRP recovery for other devices.

Flash Nougat firmware images

Get it from the Google Android Nexus images web site; unpacking the zip and then the included zip yields a lot of img files.

unzip angler-nrd90u-factory-7c9b6a2b.zip
cd angler-nrd90u/
unzip image-angler-nrd90u.zip

As I don’t want my user partition to get flashed, I did not use the included flash script, but did it manually:

fastboot flash bootloader bootloader-angler-angler-03.58.img
fastboot reboot-bootloader
sleep 5
fastboot flash radio radio-angler-angler-03.72.img
fastboot reboot-bootloader
sleep 5
fastboot erase system
fastboot flash system system.img
fastboot erase boot
fastboot flash boot boot.img
fastboot erase cache
fastboot flash cache cache.img
fastboot erase vendor
fastboot flash vendor vendor.img
fastboot erase recovery
fastboot flash recovery recovery.img
fastboot reboot

After that, boot into the normal system and let it do all the necessary upgrades. Once this is done, let us prepare for systemless root and possibly hiding it.

Get the necessary file

Get Magisk, SuperSU-magisk, as well as the Magisk-Manager.apk from this forum thread (direct links as of 2016/9: Magisk-v6.zip, SuperSU-v2.76-magisk.zip, Magisk-Manager.apk).

Transfer these files to your device – I am using an external USB stick that can be plugged into the device – or copy them via your computer or a cloud service.

Also, we need a custom recovery image; I am using TWRP. I first tried version 3.0.2-0 of TWRP, which I already had available, but that version didn’t manage to decrypt the file system and hangs. One needs at least version 3.0.2-2 from the TWRP web site.

Install latest TWRP recovery

Reboot into boot-loader, then use fastboot to flash twrp:

fastboot erase recovery
fastboot flash recovery twrp-3.0.2-2-angler.img
fastboot reboot-bootloader

After that, select Recovery with the up/down buttons and start TWRP. You will be asked for your PIN if you have one set.

Install Magisk-v6.zip

Select “Install” in TWRP, select the Magisk-v6.zip file, and watch your device being prepared for systemless root.

Install SuperSU, Magisk version

Again, boot into TWRP and use the install tool to install SuperSU-v2.76-magisk.zip. After reboot you should have a SuperSU binary running.

Install the Magisk Manager

From your device browse to the .apk and install it.

How to run safety net programs

Programs that check for safety functions (Pokemon Go, Android Pay, several bank apps) need root disabled. Open the Magisk Manager and switch the root switch to the left (off). After this, starting the program should bring you past the safety check.

Steinar H. Gunderson: BBR opensourced

17 September, 2016 - 04:30

This is pretty big stuff for anyone who cares about TCP. Huge congrats to the team at Google.

Dirk Eddelbuettel: anytime 0.0.2: Added functionality

16 September, 2016 - 09:28

anytime arrived on CRAN via release 0.0.1 a good two days ago. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects.

This new release 0.0.2 adds two new functions to gather conversion formats -- and to set new ones. It also fixes a minor build bug and robustifies a conversion which was seen to be not quite right under some time zones.

The NEWS file summarises the release:

Changes in anytime version 0.0.2 (2016-09-15)
  • Refactored to use a simple class wrapped around two vectors with (string) formats and locales; this allows for adding formats; also adds accessors for formats (#4, closes #1 and #3).

  • New functions addFormats() and getFormats().

  • Relaxed one test which showed problems on some platforms.

  • Added as.POSIXlt() step to anydate() ensuring all POSIXlt components are set (#6 fixing #5).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Craig Sanders: Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24

15 September, 2016 - 23:24

It’s Alive!

The day before yesterday (at Infoxchange where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it.

Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn’t build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing – but it turned out that many programs would segfault – e.g. it couldn’t run bash, but sh (dash) was OK.

I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn’t yet run in jessie).

After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I’d upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23.

Anyway, the point of all this is that if anyone else needs to run a wheezy or squeeze container on a docker host running libc6 2.24 (which will be quite common soon enough), you have to upgrade libc6 and related packages (and any -dev packages, including libc6-dev, that you might need in your container and that are dependent on the specific version of libc6).

I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie.

To build it, I started with the base wheezy image we’re using and created a Dockerfile etc. to update it. First, I added deb lines to /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:

APT::Default-Release "wheezy";

Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, and had to waste another 10 minutes or so building the app’s container again.

I then installed the following:

apt-get -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev \
    zlib1g-dev libssl-dev libpq-dev

To minimise the risk of incompatible updates, it’s best to install the bare minimum of jessie packages required to get your app running. The only reason I needed to install all of those -dev packages was because we needed libpq-dev, which pulled in all the rest. If your app doesn’t need to talk to postgresql, you can skip them. In fact, I probably should try to build it again without them – I added them after the first build failed but before I remembered to set APT::Default-Release (OTOH, it’s working OK now and we’re probably better off with libssl-dev from jessie).
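
Putting those steps together, the update Dockerfile looked roughly like this (a sketch reconstructed from the description above; the base image name and mirror URL are placeholders, not the real ones):

FROM local/wheezy
RUN echo 'deb http://mirror.example.com/debian jessie main' >> /etc/apt/sources.list && \
    echo 'deb http://mirror.example.com/debian jessie-updates main' >> /etc/apt/sources.list && \
    echo 'APT::Default-Release "wheezy";' >> /etc/apt/apt.conf && \
    apt-get update && \
    apt-get -y -t jessie install libc6 locales libc6-dev krb5-multidev \
        comerr-dev zlib1g-dev libssl-dev libpq-dev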

That worked fine, so I edited the FROM line in the Dockerfile for our wheezy app to use frankenwheezy and ran make build. It built, passed tests, deployed and is running. Now I can continue working on the feature I’m adding to it, but I expect there’ll be a few more yaks to shave before I’m finished.

When I finish what I’m currently working on, I’ll take a look at what needs to be done to get this app running on jessie. Wheezy’s just too old to keep using, and this frankenwheezy needs to float away on an iceberg.


Mike Gabriel: [Arctica Project] Release of nx-libs (version 3.5.99.1)

14 September, 2016 - 21:20
Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, maintenance has been continued by a versatile group of developers. The work on NX (v3) continues under the project name "nx-libs".

Release Announcement

On Tuesday, Sep 13th, version 3.5.99.1 of nx-libs was released [1].

This release brings some code cleanups regarding displayed copyright information, and an improvement for reconnecting to an already running session from an X11 server with a color depth setup different from that of the X11 server where the NX/X11 session was originally created. Furthermore, an issue reported to the X2Go developers has been fixed that caused problems on Windows clients with copy+paste actions between the NX/X11 session and the underlying MS Windows system. For details see X2Go BTS, Bug #952 [3].

Change Log

A list of recent changes (since 3.5.99.0) can be obtained from here.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

References


Creative Commons License. Copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.