Planet Debian

Planet Debian - http://planet.debian.org/

Steinar H. Gunderson: Chinese HDMI-to-SDI converters, part II

27 April, 2017 - 23:45

Following up on my previous post, I have only a few small nuggets of extra information:

The power draw appears to be 230–240 mA at 5 V, or about 1.15 W. (It doesn't matter whether they have a signal or not.) This means you can power them off a regular USB power bank; in a recent test, we used Deltaco 20000 mAh (at 3.7 V) power banks, which are supposed to power a GoPro plus such a converter for somewhere between 8 and 12 hours (we haven't measured exactly). It worked brilliantly, and solved the headache of how to get AC to the camera and converter; just slap on a power bank instead, and all you need to run is SDI.
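
As a rough sanity check of that runtime figure (my own back-of-the-envelope numbers, not the author's measurements: assume the GoPro draws around 4–5 W and the power bank's boost converter is about 85% efficient):

\[ E \approx 20\,\mathrm{Ah} \times 3.7\,\mathrm{V} = 74\,\mathrm{Wh}, \qquad t \approx \frac{74\,\mathrm{Wh}}{(1.15\,\mathrm{W} + 4.5\,\mathrm{W})/0.85} \approx 11\,\mathrm{h}, \]

which lands comfortably inside the quoted 8–12 hour window.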

Curiously enough, 230 mA is so little that the power bank thinks it doesn't count as load, and just turns itself off after ten seconds or so. However, with the GoPro also active, it stays on all the time. At least it ran for the two hours that we needed without a hitch.

The SDI converters don't accept USB directly, but you can purchase dumb USB-to-5.5x2.1mm converters cheaply from wherever, which work fine even though USB is supposed to give you only 100 mA without handshaking. Some eBay sellers seem to include them with the converters, even. I guess the power bank just doesn't care; it's spec'd to 2.1 A on two ports and 1 A on the last one.

Yves-Alexis Perez: Debian, grsecurity and passing the baton

27 April, 2017 - 18:18

Since the question popped here and there, I'll post a short blog post about the issue right now so there's a reference somewhere.

As you may know, Brad Spengler (spender) and the Pax Team recently announced that the grsecurity test patches won't be released publicly anymore. The stable patches were already restricted to enterprise, paying customers, this is now also the case for the test patches.

Obviously that means the end of the current situation in Debian since I used those test patches for the linux-grsec packages, but I'm not exactly sure what comes next and I need to think a bit about this before doing anything.

The “passing the baton” post mentions a handover to the community (though the FAQ mentions it needs to stop using the term “grsecurity”), so maybe there's some coordination possible with other users like Gentoo Hardened and Alpine, but it's not clear what would be possible with the tools we have.

I'm actually quite busy right now so I don't have much time to think about all this, but expect a new blog post when things have settled a bit and I've made up my mind.

Russ Allbery: Review: Necessity

27 April, 2017 - 11:42

Review: Necessity, by Jo Walton

Series: Thessaly #3
Publisher: Tor
Copyright: July 2016
ISBN: 0-7653-7902-3
Format: Hardcover
Pages: 331

Athena's experiment with a city (now civilization) modeled after Plato's Republic continues, but in a form that she would not have anticipated, and in a place rather far removed from its origins. But despite new awareness of the place and role of gods, a rather dramatic relocation, and unanticipated science-fiction complications, it continues in much the same style as in The Just City: thoughtful, questioning debate, a legal and social system that works surprisingly well, and a surprising lack of drama. At least, that is, until the displaced cities are contacted by the mainstream of humanity, and Athena goes unexpectedly missing.

The latter event turns out to have much more to do with the story than the former, and I regret that. Analyzing mainline human civilization and negotiating the parameters of a very odd first contact would have, at least in my opinion, lined up a bit better with the strengths of this series. Instead, the focus is primarily on metaphysics, and the key climactic moment in those metaphysics is rather mushy and incoherent compared to the sharp-edged analysis Walton's civilization is normally capable of. Not particularly unexpected, as metaphysics of this sort are notoriously tricky to approach via dialectical logic, but it was a bit of a letdown. Much of this book deals with Athena's disappearance and its consequences (including the title), and it wasn't bad, but it wanders a bit into philosophical musings on the nature of gods.

Necessity is a rather odd book, and I think anyone who started here would be baffled, but it does make a surprising amount of sense in the context of the series. Skipping ahead to here seems like a truly bad idea, but reading the entire series (relatively closely together) does show a coherent philosophical, moral, and social arc. The Just City opens with Apollo confronted by the idea of individual significance: what does it mean to treat other people as one's equals in an ethical sense, even if they aren't on measures of raw power? The Thessaly series holds to that theme throughout and follows its implications. Many of the bizarre things that happen in this series seem like matter-of-fact outcomes once you're engrossed in the premises and circumstances at the time. Necessity adds a surprising amount of more typical science fiction trappings, but they turn out to be ancillary to the story. What matters is considered action, trying to be your best self, and the earnest efforts of a society to put those principles first.

And that's the strength of the whole series, including Necessity: I like these people, I like how they think, and I enjoy spending time with them, almost no matter what they're doing. As with the previous books, we get interwoven chapters from different viewpoints, this time from three primary characters plus some important "guest" chapters. As with the previous books, the viewpoint characters are different again, mostly a generation younger, and I had to overcome my initial disappointment at not hearing the same voices. But Walton is excellent at characterization. I really like this earnest, thoughtful, oddly-structured society that always teeters on the edge of being hopelessly naive and trusting but is self-aware enough to never fall in. By the end of the book, I liked this round of characters nearly as much as I liked the previous rounds (although I've still never liked a character in these books as well as I liked Simmea).

I think one incomplete but important way to sum up the entire Thessaly series is that it's a trilogy of philosophical society-building on top of the premise of a universal love for and earnest, probing, thoughtful analysis of philosophy. Walton's initial cheat is to use a deus ex machina to jumpstart such a society from a complex human world that would be unlikely to provide enough time or space for it to build its own separate culture and tradition. I think the science-fiction trick is required to make this work — real-world societies that try this end up having to spend so much of their energy fighting intrusion from the outside and diffusion into the surrounding culture that they don't have the same room to avoid conformity and test and argue against their own visions.

Necessity is not at all the conclusion of that experiment I would expect, but it won me over, and I think it worked, even if a few bits of it felt indulgent. Most importantly for that overall project, this series is generational, and Necessity shows how it would feel to grow up deep inside it, seeing evolution on top of a base structure that is ubiquitous and ignored. Even the generation in The Philosopher Kings wasn't far enough removed to support that; Necessity is, and in a way this book shows how distinctly different and even alien human culture can become when it has space to evolve on top of different premises. I enjoyed the moments of small surprise, where characters didn't react the way that I'd expect for reasons now buried generations-deep in their philosophical foundations.

This book will not win you over if you didn't already like the series, and I suspect it will lose a few people who read the previous two books. The plot structure is a little strange, the metaphysics are a touch strained, and the ending is, well, not quite the payoff that I was hoping for, although it's thematically appropriate and grew on me after a few days of thinking it over. But I got more Socrates, finally, who is as delightful as always and sorely needed to add some irreverence and contrariness to the mix. And I got to read more about practical, thoughtful people who are trying hard to do their best, to be their best selves, and to analyze and understand the world. There's something calming, delightful, and beautifully optimistic about their approach, and I'm rather sad to not have more of it to read.

Rating: 7 out of 10

Sven Hoexter: Chrome 58 ignores commonName in certificates

26 April, 2017 - 17:08

People using Chrome might have already noticed that some internal certificates created without a SubjectAlternativeName extension fail to verify. Finally the Google Chrome team stepped forward, and after only 17 years of having SubjectAlternativeName as the place to list the FQDNs a certificate is valid for, they started to ignore the commonName. See also https://www.chromestatus.com/feature/4981025180483584.

Currently Debian/stretch still has Chromium 57, but Chromium 58 is already in unstable, so some more people might notice this change soon. I hope that everyone who maintains some broken internal CA scripting now re-reads the OpenSSL Cookbook to finally fix this stuff. In general I recommend basing your internal CA scripting on easy-rsa to avoid making every mistake in certificate management on your own.
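
For anyone fixing up their CA scripting by hand, here is a minimal sketch of issuing a self-signed certificate that carries a SubjectAlternativeName (host names and file names are placeholders; the config-file route avoids depending on newer OpenSSL command line options):

$ cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions    = v3_req
prompt             = no
[dn]
CN = internal.example.com
[v3_req]
subjectAltName = DNS:internal.example.com, DNS:alt.internal.example.com
EOF
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout key.pem -out cert.pem -config san.cnf
$ openssl x509 -in cert.pem -noout -text | grep -A1 'Subject Alternative Name'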

Russ Allbery: Review: Zero Bugs and Program Faster

26 April, 2017 - 10:17

Review: Zero Bugs and Program Faster, by Kate Thompson

Publisher: Kate Thompson
Copyright: 2015
Printing: December 2016
ISBN: 0-9961933-0-8
Format: Trade paperback
Pages: 169

Zero Bugs and Program Faster is another book I read for the engineering book club at work. Unlike a lot of the previous entries, I'd never heard about it before getting it for the book club and had no idea what to expect. What I got was a lavishly-illustrated book full of quirky stories and unusual turns of presentation on the general subject of avoiding bugs in code. Unfortunately, it's not a very deep examination of the topic. All that an experienced programmer is likely to get out of this book is the creative storytelling and the occasionally memorable illustration.

I knew that this may not be a book aimed at me when I read the first paragraph:

If two books in a bookstore teach the same thing, I take the shorter one. Why waste time reading a 500-page book if I can learn the same in 100 pages? In this book, I kept things clear and brief, so you can ingest the information quickly.

It's a nice thought, but there are usually reasons why the 500-page book has 400 more pages, and those are the things I was missing here. Thompson skims over the top of every topic. There's a bit here on compiler warnings, a bit on testing, a bit on pair programming, and a bit on code review, but they're all in extremely short chapters, usually with some space taken up with an unusual twist of framing. This doesn't leave time to dive deep into any topic. You won't be bored by this book, but most of what you'll learn is "this concept exists." And if you've been programming for a while, you probably know that already.

I learned during the work discussion that this was originally a blog and the chapters are repurposed from blog posts. I probably should have guessed at that, since that's exactly what the book feels like. It's a rare chapter that's longer than a couple of pages, including the room for the illustrations.

The illustrations, I must say, are the best part of the book. There are a ton of them, sometimes just serving the purpose of stock photography to illustrate a point (usually with a slightly witty caption), sometimes a line drawing to illustrate some point about code, sometimes an illustrated mnemonic for the point of that chapter. The book isn't available in a Kindle edition precisely because including the illustrations properly is difficult to do (per its Amazon page).

As short as this book is, only about two-thirds of it is chapters about programming. The rest is code examples (not that the chapters themselves were light on code examples). Thompson introduces these with:

One of the best ways to improve your programming is to look at examples of code written by others. On the next pages you will find some code I like. You don't need to read all of it: take the parts you like, and skip the parts you don't. I like it all.

I agree with this sentiment: reading other people's code is quite valuable. But it's also very easy to do, given a world full of free software and GitHub repositories. When reading a book, I'm looking for additional insight from the author. If the code examples are beautiful enough to warrant printing here, there presumably is some reason why. But Thompson rarely does more than hint at the reason for inclusion. There is some commentary on the code examples, but it's mostly just a discussion of their origin. I wanted to know why Thompson found each piece of code beautiful, as difficult as that may be to describe. Without that commentary, it's just an eclectic collection of code fragments, some from real systems and some from various bits of stunt programming (obfuscation contests, joke languages, and the like).

The work book club discussion showed that I wasn't the only person disappointed by this book, but some people did like it for its unique voice. I don't recommend it, but if it sounds at all interesting, you may want to look over the corresponding web site to get a feel for the style and see if this is something you might enjoy more than I did.

Rating: 5 out of 10

Dirk Eddelbuettel: RcppTOML 0.1.3

26 April, 2017 - 08:01

A new bug fix release of RcppTOML arrived on CRAN today. Table arrays were (wrongly) not allowing for nesting; a simple recursion fix addresses this.

RcppTOML brings TOML to R. TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML -- though sadly these may be too ubiquitous now. TOML is making good inroads with newer and more flexible projects such as the Hugo static blog compiler, or the Cargo system of Crates (aka "packages") for the Rust language.
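
As a quick illustration of the package in use (a minimal sketch; parseTOML() is the package's main entry point, and the sample file is made up):

$ cat > config.toml <<'EOF'
title = "demo"

[owner]
name = "someone"
dob  = 1979-05-27T07:32:00Z
EOF
$ Rscript -e 'library(RcppTOML); str(parseTOML("config.toml"))'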

Changes in version 0.1.3 (2017-04-25)
  • Nested TableArray types are now correctly handled (#16)

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppTOML page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Keith Packard: TeleMini3

25 April, 2017 - 23:01
TeleMini V3.0 Dual-deploy altimeter with telemetry now available

TeleMini v3.0 is an update to our original TeleMini v1.0 flight computer. It is a miniature (1/2 inch by 1.7 inch) dual-deploy flight computer with data logging and radio telemetry. Small enough to fit comfortably in an 18mm tube, this powerful package does everything you need on a single board:

  • 512kB on-board data logging memory, compared with 5kB in v1.

  • 40mW, 70cm ham-band digital transceiver for in-flight telemetry and on-the-ground configuration, compared to 10mW in v1.

  • Transmitted telemetry includes altitude, speed, acceleration, flight state, igniter continuity, temperature and battery voltage. Monitor the state of the rocket before, during and after flight.

  • Radio direction finding beacon transmitted during and after flight. This beacon can be received with a regular 70cm Amateur radio receiver.

  • Barometer accurate to 100k' MSL. Reliable apogee detection, independent of flight path. Barometric data recorded on-board during flight. The v1 boards could only fly to 45k'.

  • Dual-deploy with adjustable apogee delay and main altitude. Fires standard e-matches and Q2G2 igniters.

  • 0.5” x 1.7”. Fits easily in an 18mm tube. This is slightly longer than the v1 boards to provide room for two extra mounting holes past the pyro screw terminals.

  • Uses rechargeable Lithium Polymer battery technology. All-day power in a small and light-weight package.

  • Learn more at http://www.altusmetrum.org/TeleMini/

  • Purchase these at http://shop.gag.com/home-page/telemini-v3.html

I don't have anything in these images to show just how tiny this board is—but the spacing between the screw terminals is 2.54mm (0.1in), and the whole board is only 13mm wide (1/2in).

This was a fun board to design. As you might guess from the version number, we made a couple of prototypes of a version 2 using the same CC1111 SoC/radio part as version 1 but in the EasyMini form factor (0.8 by 1.5 inches). Feedback from existing users indicated that bigger wasn't better in this case, so we shelved that design.

With the availability of the STM32F042 ARM Cortex-M0 part in a 4mm square package, I was able to pack that, the higher power CC1200 radio part, a 512kB memory part and a beeper into the same space as the original TeleMini version 1 board. There is USB on the board, but it's only on some tiny holes, along with the cortex SWD debugging connection. I may make some kind of jig to gain access to that for configuration, data download and reprogramming.

For those interested in an even smaller option, you could remove the screw terminals and battery connector and directly wire to the board, and replace the beeper with a shorter version. You could even cut the rear mounting holes off to make the board shorter; there are no components in that part of the board.

Daniel Pocock: FSFE Fellowship Representative, OSCAL'17 and other upcoming events

25 April, 2017 - 19:57

The Free Software Foundation of Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA) and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.

I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.

Please consider becoming an FSFE fellow or donor

The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone; it is an opportunity to become more aware of and involved in the debate about technology's impact on society, for better or worse. Developers, users and any other supporters of the organization's mission are welcome to join, here is the form. You don't need to be a fellow or pay any money to be an active part of the free software community, and FSFE events generally don't exclude non-members; nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.

Attending OSCAL'17, Tirana

During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.

What is your view on the Fellowship and FSFE structure?

Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic. Debate about this topic is very welcome and I would be particularly interested to hear any concerns or ideas for improvement that people may contribute. One of the best places to share these ideas would be through the FSFE's discussion list.

In any case, the fellowship representative cannot single-handedly overhaul the organization. I hope to be a constructive part of the team, and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.

Tim Retout: Packet.net arm64 servers

25 April, 2017 - 19:38

Packet.net offer an ARMv8 server with 96 cores for $0.50/hour. I signed up and tried building LibreOffice to see what would happen. Debian isn't officially supported there yet, but they offer Ubuntu, which suffices for testing the hardware.

Final build time: around 12 hours, compared to 2hr 55m on the official arm64 buildd.

Most of the LibreOffice build appeared to consist of "touch /some/file" repeated endlessly - I have a suspicion that the I/O performance might be low on this server (although I have no further evidence to offer for this). I think the next thing to try is building on a tmpfs, because the server has 128GB RAM available, and it's a shame not to use it.
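
For the record, the tmpfs experiment would look something like this (a sketch with made-up paths; tmpfs can be sized generously given the 128GB of RAM):

$ sudo mkdir -p /mnt/build
$ sudo mount -t tmpfs -o size=100G tmpfs /mnt/build
$ cp -a ~/libreoffice /mnt/build/ && cd /mnt/build/libreoffice
$ make build-nocheck   # or whatever build invocation you were using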

Wouter Verhelst: Removing git-lfs

25 April, 2017 - 16:56

Git is cool, for reasons I won't go into here.

It doesn't deal very well with very large files, but that's fine; when using things like git-annex or git-lfs, it's possible to deal with very large files.

But what if you've added a file to git-lfs which didn't need to be there? Let's say you installed git-lfs and told it to track all *.zip files, but then it turned out that some of those files were really small, and that the extra overhead of tracking them in git-lfs is causing a lot of grief with your users. What do you do now?

With git-annex, the solution is simple: you just run git annex unannex <filename>, and you're done. You may also need to tell the assistant to no longer automatically add that file to the annex, but beyond that, all is well.

With git-lfs, this works slightly differently. It's not much more complicated, but it's not documented in the man page. The naive way to do it would be to just run git lfs untrack; but when I tried that it didn't work. Instead, I found that the following does work (a consolidated command sequence follows the list):

  • First, edit your .gitattributes files, so that the file which was (erroneously) added to git-lfs is no longer matched against a git-lfs rule in your .gitattributes.
  • Next, run git rm --cached <file>. This will tell git to mark the file as removed from git, but crucially, will retain the file in the working directory. Dropping the --cached will cause you to lose those files, so don't do that.
  • Finally, run git add <file>. This will cause git to add the file to the index again; but as it's now no longer being smudged up by git-lfs, it will be committed as the normal file that it's supposed to be.
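
Put together, the whole dance looks like this (the file name and pattern are placeholders for whatever you erroneously tracked):

$ $EDITOR .gitattributes     # remove (or narrow) the '*.zip filter=lfs ...' line
$ git rm --cached small.zip  # drop from the index, keep the working-tree copy
$ git add small.zip          # re-add; no git-lfs smudge filter applies anymore
$ git commit -m "Store small.zip as a plain blob rather than in git-lfs"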

Reproducible builds folks: Reproducible Builds: week 104 in Stretch cycle

25 April, 2017 - 14:38

Here's what happened in the Reproducible Builds effort between Sunday April 16 and Saturday April 22 2017:

Upcoming events
  • On April 26th Chris Lamb will give a talk at foss-north 2017 in Gothenburg, Sweden on Reproducible Builds.

  • Between May 5th-7th the Reproducible Builds Hackathon 2017 will take place in Hamburg, Germany.

  • On May 13th Chris Lamb will give a talk at OSCAL'17 in Tirana, Albania on Reproducible Builds.

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Chris West:

Reviews of unreproducible packages

37 package reviews have been added, 64 have been updated and 16 have been removed in this week, adding to our knowledge about identified issues.

One issue type has been updated:

Two issue types have been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (2)
diffoscope development

Misc.

This week's edition was written by Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Mike Gabriel: Making Debian experimental's X2Go Server Packages available on Ubuntu, Mint and alike

24 April, 2017 - 21:48

Often I get asked: How can I test the latest nx-libs packages [1] with a stable version of X2Go Server [2] on non-Debian, but Debian-like systems (e.g. Ubuntu, Mint, etc.)?

This is quite easy, if you are not scared of building binary Debian packages from Debian source packages. Until X2Go Server (and NXv3) are made available in Debian unstable, brave testers should follow the installation recipe below.

Step 1: Add Debian experimental as Source Package Source

Add Debian experimental as source package provider:

$ echo "deb-src http://httpredir.debian.org/debian experimental main" | sudo tee /etc/apt/sources.list.d/debian-experimental.list
$ sudo apt-get update
Step 2: Obtain Build Tools and Build Dependencies

When building software, you need to have some extra packages. Those packages will not be needed at runtime of the built piece of software, so you may want to take some notes on what extra packages get installed in the step below. If you plan on rebuilding X2Go Server and NXv3 several times, then simply leave the build dependencies installed:

$ sudo apt-get build-dep nx-libs
$ sudo apt-get build-dep x2goserver
Step 3: Build NXv3 and X2Go Server from Source

Building NXv3 (aka nx-libs) takes a while, so it may be time to get some coffee now... The build process should not run as superuser root. Stay with your normal user account.

$ mkdir Development/ && cd Development/
$ apt-get source -b nx-libs

[... enjoy your coffee, there'll be much output on your screen... ]

$ apt-get source -b x2goserver

In your working directory, you should now find various new files ending with .deb.

Step 4: Install the built packages

We will now install these .deb files. It does not hurt to simply install all of them:

sudo dpkg -i *.deb

The above command might result in some error messages. Ignore them; you can easily fix things by installing the missing runtime dependencies:

sudo apt-get install -f
Play it again, Sam

If you want to re-do the above with some new nx-libs or x2goserver source package version, simply create an empty folder and repeat those steps above. The dpkg command will install the .deb files over the currently installed package versions and update your system with your latest build.

The disadvantage of this build-from-source approach (a temporary recommendation until X2Go Server & co. have landed in Debian unstable) is that you have to check for updates manually from time to time.

Recommended versions

For X2Go Server, the 4.0.1.x release series is considered stable. The version shipped with Debian has been patched to work with the upcoming nx-libs 3.6.x series, but also tolerates the older 3.5.0.x series as shipped with X2Go's upstream packages.

For NXv3 (aka nx-libs) we recommend using (thus, waiting for) the 3.5.99.6 release. The package has been uploaded to Debian experimental already, but waits in Debian NEW for some minor ftp-master ACK (we added one binary package with the recent upload).

References

Mike Gabriel: [Arctica Project] Release of nx-libs (version 3.5.99.6)

24 April, 2017 - 21:12
Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Friday, Apr 21st 2017, version 3.5.99.6 of nx-libs was released [1].

As some of you might have noticed, the release announcements for 3.5.99.4 and 3.5.99.5 have never been posted / written, so this announcement lists changes introduced since 3.5.99.3.

Credits

There are always many people to thank, so I won't mention all here. The person I need to mention here is Mihai Moldovan, though. He is virtually our QA manager, although not officially entitled. The feedback he gives on code reviews is sooo awesome!!! May you be available to our project for a long time. Thanks a lot, Mihai!!!

Changes between 3.5.99.4 and 3.5.99.3
  • Use RPATH in nxagent now for finding libNX_X11 (our fake libX11 with nxcomp support added).
  • Drop support for various archaic platforms.
  • Fix valgrind issue in new Xinerama code.
  • Regression: Fix crash due to incompletely backported code in 3.5.99.3.
  • RPM packaging review by nx-libs's official Fedora maintainer (thanks, Orion!).
  • Update roll-tarball.sh script.
Changes between 3.5.99.5 and 3.5.99.4
  • Support building against libXfont2 API (using Xfont2, if available in build environment, otherwise falling back to Xfont(1) API)
  • Support built-in fonts, no requirement anymore to have the misc fonts installed.
  • Various Xserver os/ and dix/ backports from X.org.
  • ABI backports: CreatePixmap allocation hints, no index in CloseScreen() destructors propagated anymore, SetNotifyFd ABI, GetClientCmd et al. ABI.
  • Add quilt based patch system for bundled Mesa.
  • Fix upstream ChangeLog creation in roll-tarball.sh.
  • Keystroke.c code fully revisited by Ulrich Sibiller (Thanks!!!).
  • nxcomp now builds again on Cygwin. Thanks to Mike DePaulo for providing the patch!
  • Bump libNX_X11 to a status that resembles latest X.org libX11 HEAD. (Again, thanks Ulrich!!!).
  • Various changes to make valgrind more happy (esp. uninitialized memory issues).
  • Hard-code RGB color values, drop previously shipped rgb.txt config file.
Changes between 3.5.99.6 and 3.5.99.5
  • Regression fix for 3.5.99.5: Fonts are now displayed correctly again after session resumption.
  • Prefer source tree's nxcomp to system-wide installed nxcomp headers.
  • Provide nxagent specific auto-detection code for available display numbers (see nxagent man page for details, under -displayfd).
  • Man page updates for nxagent (thanks, Ulrich!).
  • Make sure that xkbcomp is available to nxagent (thanks, Mihai!).
  • Switch on building nxagent with MIT-SCREEN-SAVER extension.
  • Fix FTBFS on SPARC64, arm64 and m68k Linux platforms.
  • Avoid duplicate runs of 'make build' due to flaw in main Makefile.
Change Log

Lists of changes (since 3.5.99.3) can be obtained from here (3.5.99.3 -> .4), here (3.5.99.4 -> .5) and here (3.5.99.5 -> .6)

Known Issues

A list of known issues can be obtained from the nx-libs issue tracker [issues].

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component). The nxagent Xserver can be used from remote sessions (via the nxcomp compression library) or as a nested Xserver.

Ubuntu developers, please note: we have added nightly builds for the latest Ubuntu releases to our build server. At the moment, you can obtain nx-libs builds for Ubuntu 16.10 (yakkety) and 17.04 (zesty) as nightly builds.

References

Norbert Preining: Leaving for BachoTeX 2017

24 April, 2017 - 09:20

Tomorrow we are leaving for TUG 2017 @ BachoTeX, one of the most unusual and great series of conferences (BachoTeX) merged with the most important series of TeX conferences (TUG). I am looking forward to this trip and to seeing all the good friends there.

And having the chance to visit my family in Vienna at the same time makes this trip, however painful the long flight with our daughter may be, worth it.

See you in Vienna and Bachotek!

PS: No pun intended with the photo-logo combination, just a shot of the great surroundings in Bachotek

Mark Brown: Bronica Motor Drive SQ-i

23 April, 2017 - 20:17

I recently got a Bronica SQ-Ai medium format film camera which came with the Motor Drive SQ-i. Since I couldn’t find any documentation at all about it on the internet and had to figure it out for myself I figured I’d put what I figured out here. Hopefully this will help the next person trying to figure one out, or at least by virtue of being wrong on the internet I’ll be able to get someone who knows what they’re doing to tell me how the thing really works.

The motor drive attaches to the camera using the tripod socket; a replacement tripod socket is provided on the base plate. There's also a metal plate with the bottom of the hand grip attached to it, held on to the base plate with a thumb screw. When this is released it gives access to the screw holding in the battery compartment which (very conveniently) takes 6 AA batteries. This also provides power to the camera body when attached.

On the back of the base of the camera there's a button with a red LED next to it which illuminates slightly when the button is pressed (it's visible in low light only). I'm not 100% sure what this is for; I'd have guessed a battery check if the light were easier to see.

On the top of the camera there is a hot shoe (with a plastic blanking plate, a nice touch), a mode selector and two buttons. The larger button on the front replicates the shutter release button on the body (which continues to function as well) while the smaller button to the rear of the camera controls the motor – depending on the current state of the camera it cocks the shutter, winds the film and resets the mirror when it is locked up. The mode dial offers three modes: off, S and C. S and C appear to correspond to the S and C modes of the main camera, single and continuous mirror lockup shots.

Overall with this grip fitted and a prism attached the camera operates very similarly to a 35mm SLR in terms of film winding and so on. It is of course heavier (the whole setup weighs in at 2.5kg) but balanced very well and the grip is very comfortable to use.

Andreas Metzler: balance sheet snowboarding season 2016/17

23 April, 2017 - 19:03

Another year of minimal snow. Again there was early snowfall in the mountains at the start of November, but the snow was soon gone again. There was no snow up to 2000 meters of altitude until about January 3. Christmas week was spent hiking up and taking the lift down.

I had my first day on board on January 6 on artificial snow, and the first one on natural snow on January 19. Down where I live (800m), snow was scarce the whole winter, never topping 1m. Measuring station Diedamskopf at 1800m above sea level topped out at slightly above 200cm, on April 19. Last boarding day was yesterday (April 22) in Warth with hero conditions.

I had a preopening on the glacier in Pitztal at the start of November with Pure Boarding. However, due to the long waiting period between pre-opening and start of season it did not pay off. By the time I rode regularly I had forgotten almost everything I learned at Carving-School.

Nevertheless it was a strong season due to long periods of stable, sunny weather, with 30 days on piste (counting the day I went up and barely managed a single blind run in superdense fog).

Anyway, here is the balance-sheet:

                          2005/06 2006/07 2007/08 2008/09 2009/10 2010/11 2011/12 2012/13 2013/14 2014/15 2015/16 2016/17
number of (partial) days       25      17      29      37      30      30      25      23      30      24      17      30
Damüls                         10      10       5      10      16      23      10       4      29       9       4       4
Diedamskopf                    15       4      24      23      13       4      14      19       1      13      12      23
Warth/Schröcken                 0       3       0       4       1       3       1       0       0       2       1       3
total meters of altitude   124634   74096  219936  226774  202089  203918  228588  203562  274706  224909  138037  269819
highscore                  10247m   8321m  12108m  11272m  11888m  10976m  13076m  13885m  12848m  13278m  11015m  12245m
# of runs                     309     189     503     551     462     449     516     468     597     530     354     634

Enrico Zini: Splitting a git-annex repository

23 April, 2017 - 01:48

I have a git annex repo for all my media that has grown to 57866 files and git operations are getting slow, especially on external spinning hard drives, so I decided to split it into separate repositories.

This is how I did it, with some help from #git-annex. Suppose the old big repo is at ~/oldrepo:

# Create a new repo for photos only
mkdir ~/photos
cd ~/photos
git init
git annex init laptop

# Hardlink all the annexed data from the old repo
cp -rl ~/oldrepo/.git/annex/objects .git/annex/

# Regenerate the git annex metadata
git annex fsck --fast

# Also split the repo on the usb key
cd /media/usbkey
git clone ~/photos
cd photos
git annex init usbkey
cp -rl ../oldrepo/.git/annex/objects .git/annex/
git annex fsck --fast

# Connect the annexes as remotes of each other
git remote add laptop ~/photos
cd ~/photos
git remote add usbkey /media/usbkey

At this point, I went through all repos doing standard cleanup:

# Remove unneeded hard links
git annex unused
git annex dropunused --force 1-12345

# Sync
git annex sync

To make sure nothing is missing, I used git annex find --not --in=here to see if, for example, the usbkey that should have everything could be missing something.
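
Concretely, that check can be run from each repository against each remote (the remote names are the ones set up above):

# Anything that should be here but isn't?
git annex find --not --in=here

# From the laptop: is the usbkey missing anything?
git annex find --not --in=usbkey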

Manuel A. Fernandez Montecelo: Debian GNU/Linux port for RISC-V 64-bit (riscv64)

22 April, 2017 - 09:00

This is a post describing my involvement with the Debian GNU/Linux port for RISC-V (unofficial and not endorsed by Debian at the moment) and announcing the availability of the repository (still very much WIP) with packages built for this architecture.

If not interested in the story but you want to check the repository, just jump to the bottom.

Roots

A while ago, mostly during 2014, I was involved in the Debian port for OpenRISC (or1k) ─ about which I posted (by coincidence) exactly 2 years ago.

The two of us working on the port stopped in August or September of that year, after learning that the copyright of the code to add support for this architecture in GCC would not be assigned to the FSF, so it would never be added to GCC upstream ─ unless the original authors changed their mind (which they didn't) or there was a clean-room reimplementation (which hasn't happened so far).

But a few other key things contributed to the decision to stop working on that port, which bear a direct relationship to this story.

One thing that particularly influenced me to stop working on it was a sense of lack of purpose, all things considered, for the OpenRISC port that we were working on.

For example, these chips are sometimes used as part of bigger devices by Samsung to control or wake up other chips; but it was not clear whether there would ever be devices with OpenRISC as the main chip, especially devices powerful enough to run Linux or similar kernels, and Debian on top. One can use FPGAs to synthesise OpenRISC or1k, but these are slow, and expensive when using lots of memory.

Without prospects of having hardware easily available to users, there's not much point in having a whole Debian port ready to run on hardware that never comes to be.

Yeah, sure, it's fun to create such a port, but it's tons of work to maintain and keep it up to date forever, and with close to zero users it's very unrewarding.

Another thing that contributed to the decision to stop is that, at least in my opinion, 32-bit was not future-proof enough for general purpose computing, especially for new devices and ports starting to take off in that time and age. There was some incipient work to create another OpenRISC design for 64-bits, but it was still in an early phase.

My secret hope and ultimate goal was to be able to run as free(-ish) a computer as possible as my main system. Still today many people are buying and using 32-bit devices, like small boards; but very few use them as their main computer or as servers for demanding workloads or complex services. So for me, even if feasible if one is very austere and dedicated, OpenRISC or1k failed that test.

And lastly, another thing happened at the time...

Enter RISC-V

In August 2014, at the point when we were fully acknowledging the problem of upstreaming (or rather, lack thereof) the support for OpenRISC in GCC, RISC-V was announced to the world, bringing along papers with suggestive titles such as Instruction Sets Should Be Free: The Case For RISC-V (pdf) and articles like RISC-V: An Open Standard for SoCs - The case for an open ISA in EE Times.

RISC-V (like the previous RISC-n marks) had been designed (or rather, was being designed, because it was and is an as yet unfinished standard) by people from UC Berkeley, including David Patterson, the pioneer of RISC computer designs and co-author of the seminal book “Computer Architecture: A Quantitative Approach”. Other very capable people are also leading the project, doing the design and legwork to make it happen ─ see the list of contributors.

But, apart from throwing names, the project has many other merits.

Similarly to OpenRISC, RISC-V is an open instruction set architecture (ISA), but with the advantage of being designed in more recent times (thus avoiding some mistakes and optimising for problems discovered more recently, as technology evolves); with more resources; with support for instruction widths of 32, 64 and even 128 bits; with a clean set of standard but optional extensions (atomics, float, double, quad, vector, ...); and with reserved space to add new extensions in ordered and compatible ways.

In the view of some people in the OpenRISC community, this unexpected development in a way made the ongoing update of OpenRISC for 64-bits irrelevant, and from what I know, and for better or worse, all efforts on that front stopped.

Also interesting (if nothing else, for my purposes of running as free a system as possible) was that the people behind RISC-V had strong intentions to make it useful for creating modern hardware, and were pushing heavily towards that from the beginning.

Together with this announcement, or soon after, came the promise of free-ish hardware in the form of the lowRISC project. Although still today it seems to be a long way from actually shipping hardware, at least there was some prospect of getting it some day.

On top of all that, regarding the freedom aspect, both the Berkeley and lowRISC teams engaged with the free hardware community from very early on, including attending and presenting at OpenRISC events; and lowRISC intended to have as much free hardware as possible in their planned SoC.

Starting to work on the Debian port, this time for RISC-V

So in late 2014 I slowly started to prepare the Debian port, switching on and off.

The Userland spec was frozen in 2014 just before the announcement; the Privilege one is still not frozen today, so there was no need to rush.

There were plans to upstream the support in the toolchain for RISC-V (GNU binutils, glibc and GCC; Linux and other useful software like Qemu) in 2015, but sadly these did not materialise until late 2016 and 2017. One of the main reasons for the delay was the slowness in sorting out the copyright assignment of the code to the FSF (again). Still today, only binutils and GCC are upstreamed, and Linux and glibc depend on the Privilege spec being finished, so it will take a while.

After the experience with OpenRISC and the support in GCC, I didn't want to invest too much time, lest it all become another dead end due to lack of upstreaming ─ so I was just cross-compiling here and there, testing Qemu (which still today is very limited for this architecture, e.g. no network support and very limited character and block devices), trying to find and report bugs in the implementations, and sending patches (although I did not contribute much in the end).

Incompatible changes in the toolchain

In terms of compiling packages and building up a repository, things were complicated, and less mature and stable than the OpenRISC equivalents were even back in 2014.

In theory, with the Userland spec being frozen, regular programs (below the Operating System level) compiled at any time should run today; but in practice what happened is that there were several rounds of profound ─or, at least, disrupting─ changes in the toolchain before and while being upstreamed, which made the binary packages that I had built not work at all (changes in the dynamic loader, in the registers where arguments are stored when calling functions, etc.).

These major breakages happened several times already, and kind of unexpectedly ─ at least for the people not heavily involved in the implementation.

When the different pieces are upstreamed it is expected that these breakages won't happen; but still there's at least the fundamental bit of glibc, which will probably change things once again in incompatible ways before or while being upstreamed.

Outside Debian but within the FOSS / Linux world, the main project that I know of is that some people from Fedora also started a port in mid 2016 and made great advances, but ─from what I know─ they put the project in the freezer in late 2016 until all such problems are resolved ─ they don't want to spend time rebootstrapping again and again.

What happened recently on the Debian front

In early 2016 I created the page for RISC-V in the Debian wiki, expecting that things would at last be fully stable and the important bits of the toolchain upstreamed during that year ─ I was too optimistic.

Some other people (including Debian folks) have been contributing for a while, in the wiki, mailing lists and IRC channels, and in the RISC-V project mailing lists ─ you will see their names everywhere.

However, the combination of lack of hardware, software not yet upstreamed and shortcomings of emulators (chiefly Qemu) makes contributions hard and very tedious, so nothing much visible to the outside world has happened recently in terms of software.

The private repository-in-the-making

In late 2015 and the beginning of 2016, having some free time on my hands and expecting that all things would coalesce quickly, I started to build a repository of binary packages in a more systematic way, with most of the basic software that one can expect in a basic Debian system (including things common to all Linux systems, and also specific Debian software like dpkg or apt, and even aptitude!).

After that I also built many others outside the basic system (more than 1000 source packages and 2000 or 3000 arch-dependent binary packages in total), especially popular libraries (e.g. boost, gtk+ versions 2 and 3), interpreters (several versions of lua, perl and python, also versions 2 and 3) and in general packages that are needed to build many other packages (like doxygen). Unfortunately, some of these most interesting packages do not compile cleanly (more because of obscure or silly errors than proper porting problems), so they are not included at the moment.

I intentionally avoided trying to compile thousands of packages in the archive which would be of nobody's use at this point; but many more could be compiled without much effort.

As for the how: initially I started cross-compiling and using rebootstrap, which was of great help in the beginning. Some of the packages that I cross-compiled had bugs that I did not know how to debug without a “live” and “native” (within emulators) system, so I tried to switch to “natively” built packages very early on. For that I needed many packages built natively (like doxygen or cmake) which would have been unnecessary had I remained cross-compiling ─ the host tools would be used in that case.

But this also forced me to eat my own dog food, which, even if much slower and more tedious, was on the whole a more instructive experience; and above all, it helped to test and make sure that the tools and the whole stack were working well enough to build hundreds of packages.

Why the repository-in-the-making was private

Until now I did not attempt to make the repository available on-line, for several reasons.

First, because it would be kind of useless to publish files that were not working or would soon stop working, due to the incompatible changes in the toolchain rendering many or most of the built packages useless. And because, for many months now, I have expected things to stabilise and to have something stable “really soon now” ─ but this hasn't happened yet.

Second, because of lack of resources and time since mid 2016, and because I got some packages compiled only thanks to (mostly small and unimportant, but undocumented and unsaved) hacks, often working around temporary bugs and thus not worth sending upstream; so I couldn't share the binaries without sharing the full source and fulfilling licenses like the GNU GPL. I did a new round of clean rebuilds in the last few weeks, just finished; the result is close to 1000 arch-dependent packages.

And third, because of lack of demand. This changed in the last few weeks, when other people started to ask me to share the results even if incomplete or not working properly (I had one request in the past, but couldn't oblige in time).

Finally, the work so far: repository now on-line

So finally, with the great help from Kurt Keville from MIT, and Bytemark sponsoring a machine where most of the packages were built, here we have the repository:

The lines for /etc/apt/sources.list are:

 deb [ arch=riscv64 signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main
 deb-src [ signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main

The repository is signed with my key as Debian Developer, contained in the file /usr/share/keyrings/debian-keyring.gpg, which is part of the package debian-keyring (available from Debian and derivatives).
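
For completeness, a sketch of wiring this up on a Debian(-ish) system (assuming an apt recent enough to understand the signed-by option; note that existing sources may then also need arch= restrictions, since apt will otherwise try to fetch riscv64 indices from every source):

$ sudo apt-get install debian-keyring    # provides the signing key used above
$ sudo dpkg --add-architecture riscv64
$ echo 'deb [ arch=riscv64 signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main' \
    | sudo tee /etc/apt/sources.list.d/riscv64.list
$ sudo apt-get update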

WARNING!!

This repository, though, is very much WIP, incomplete (some package dependencies cannot be fulfilled, and it's only a small percentage of the Debian archive, not trying to be comprehensive at the moment) and probably does not work at all in your system at this point, for the following reasons:

  • I did not create Debian packages for fundamental pieces like glibc, gcc, linux, etc.; although I hope that this happens soon after the next stable release (Stretch) is out of the door and the remaining pieces are upstreamed (help welcome).
     
  • There's no base system image yet, containing the software above plus a boot loader and a sequence to set up the system correctly. At the moment you have to provide your own or use one that you can find around the net. But hopefully we will provide an image soon. (Again, help welcome.)
     
  • Due to ongoing changes in the toolchain, if the version of the toolchain in your image is very new or very old, chances are that the binaries won't work at all. The one that I used was the master branch on 20161117 of their fork of the toolchain: https://github.com/riscv/riscv-gnu-toolchain

The combination of all these shortcomings is especially unfortunate, because without glibc provided it will be difficult to get the binaries to run at all; but some packages that are arch-dependent yet not too tied to libc or the dynamic loader will not be affected.

At least you can try one of the few static packages present in Debian, like the one in the package bash-static. With moving parts like the dynamic loader and libc removed, and since the basic machine instructions have been stable for several years now, it should work; but I wouldn't rule out some dark magic that prevents even static binaries from working.

Still, I hope that the repository in its current state is useful to some people, at least to those who requested it. If one has the environment set up, it's easy to unpack the contents of the .deb files and try out the software (which often is not trivial or is very slow to compile, or needs lots of dependencies to be built first).
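
For example, to poke at the static bash package without a working dynamic loader (a sketch; the exact .deb file name will vary):

$ dpkg-deb -x bash-static_*_riscv64.deb extracted/
$ file extracted/bin/bash-static   # should report an ELF executable for RISC-V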

... and finally, even if not useful at all for most people at the moment, I hope that efforts like this spark your interest to contribute to free software, free hardware, or both! :-)

Ritesh Raj Sarraf: Indian Economy

22 April, 2017 - 01:33

This has finally gotten me to ask the question.


All this time since my childhood, I grew up reading, hearing and watching that the core of India's economy is agriculture, and that it needs the highest bracket in the country's budgets. It still applies today. Every budget has special waivers for the agriculture sector, typically in hundreds of thousands of crores of Indian Rupees. The most recent to mention is INR 27,420 crores waived off for just a single state (Uttar Pradesh), as was promised by the winning party during their campaign.

Wow. A quick search yields that I am not alone in noticing this. In the past, whenever I talked about the economy of this country, I mostly sidelined myself, because I never studied here, and neither did I live here much during my childhood or teenage days. Only in the last decade have I realized how much tax I pay, and where my taxes go.

I do see a justification for these loan waivers though. As a democracy, to remain in power, it is the people you need support from. And if a majority of your 1.3 billion population is in the agriculture sector, it is a very, very lucrative deal to attract them through such waivers and expect their vote.

Here's another snippet from Wikipedia on the same topic:

Agricultural Debt Waiver and Debt Relief Scheme

On 29 February 2008, P. Chidambaram, at the time Finance Minister of India, announced a relief package for farmers which included the complete waiver of loans given to small and marginal farmers.[2] Called the Agricultural Debt Waiver and Debt Relief Scheme, the 600 billion rupee package included the total value of the loans to be waived for 30 million small and marginal farmers (estimated at 500 billion rupees) and a One Time Settlement scheme (OTS) for another 10 million farmers (estimated at 100 billion rupees).[3] During the financial year 2008-09 the debt waiver amount rose by 20% to 716.8 billion rupees and the overall benefit of the waiver and the OTS was extended to 43 million farmers.[4] In most of the Indian States the number of small and marginal farmers ranges from 70% to 94% of the total number of farmers.


And let's not forget how many people pay taxes in India. To quote an unofficial statement from an Indian media house:

Only about 1 percent of India's population paid tax on their earnings in the year 2013, according to the country's income tax data, published for the first time in 16 years.

The report further states that a total of 28.7 million individuals filed income tax returns, of which 16.2 million did not pay any tax, leaving only about 12.5 million tax-paying individuals, which is just about 1 percent of the 1.23 billion population of India in the year 2013.

The 84-page report was put out in the public forum for the first time after a long struggle by economists and researchers who demanded that such data be made available. In a press release, a senior official from India's income tax department said the objective of publishing the data is to encourage wider use and analysis by various stakeholders including economists, students, researchers and academics for purposes of tax policy formulation and revenue forecasting.

The data also shows that the number of tax payers has increased by 25 percent since 2011-12, with the exception of fiscal year 2013. The year 2014-15 saw a rise to 50 million tax payers, up from 40 million three years ago. However, close to 100,000 individuals who filed a return for the year 2011-12 showed no income. The report brings to light low levels of tax collection and a massive amount of income inequality in the country, showing the rich aren't paying enough taxes.

Low levels of tax collection could be a challenge for the current government as it scrambles for money to spend on its ambitious plans in areas such as infrastructure and science & technology. Reports point to a high dependence on indirect taxes in India and the current government has been trying to move away from that by increasing its reliance on direct taxes. Official data show that the dependence has come down from 5.93 percent in 2008-09 to 5.47 percent in 2015-16.


I can't say if I am correct in my understanding of this chart, or my understanding of the economy of India; but if there's someone well versed in this topic who has studied the Indian economy, I'd be really interested to know what they have to say. Because otherwise, from my own interpretation of the subject, I don't see the day far away when this economy will plummet.


PS: Image source Wikipedia https://upload.wikimedia.org/wikipedia/commons/2/2e/1951_to_2013_Trend_C...


Joachim Breitner: veggies: Haskell code generation from scratch

21 April, 2017 - 22:30

How hard is it to write a compiler for Haskell Core? Not too hard, actually!

I wish we had a formally verified compiler for Haskell, or at least for GHC's intermediate language Core. Now, formalizing that part of GHC itself seems to be far out of reach, with the many phases the code goes through (Core to STG to CMM to Assembly or LLVM), optimizations happening at all of these phases, and the many complicated details of the highly tuned GHC runtime (pointer tagging, support for concurrency and garbage collection).

Introducing Veggies

So to make that goal of a formally verified compiler more feasible, I set out and implemented code generation from GHC’s intermediate language Core to LLVM IR, with simplicity as the main design driving factor.

You can find the result in the GitHub repository of veggies (the name derives from “verifiable GHC”). If you clone that and run ./boot.sh some-directory, you will find that you can use the program some-directory/bin/veggies just like you would use ghc. It comes with the full base library, so your favorite variant of HelloWorld might just compile and run.
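
For instance, something along these lines should work, assuming the ghc-style driver accepts the usual flags (a sketch, not a tested transcript):

$ ./boot.sh my-veggies
$ cat > Hello.hs <<'EOF'
main :: IO ()
main = putStrLn "Hello from veggies!"
EOF
$ my-veggies/bin/veggies Hello.hs -o hello
$ ./hello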

As of now, the code generation handles all the Core constructs (which is easy when you simply ignore all the types). It supports a good number of primitive operations, including pointers and arrays – I implement these as needed – and has support for FFI calls into C.

Why you don't want to use Veggies

Since the code generator was written with simplicity in mind, performance of the resulting code is abysmal: Everything is boxed, i.e. represented as a pointer to some heap-allocated data, including “unboxed” integer values and “unboxed” tuples. This is very uniform and simplifies the code, but it is also slow, and because there is no garbage collection (and probably never will be for this project), it will fill up your memory quickly.

Also, the code currently only supports 64-bit architectures, and this is hard-coded in many places.

There is no support for concurrency.

Why it might be interesting to you nevertheless

So if it is not really usable to run programs with, should you care about it? Probably not, but maybe you do for one of these reasons:

  • You always wondered how a compiler for Haskell actually works, and reading through a little over a thousand lines of code is less daunting than reading through the 34k lines of code that is GHC's backend.
  • You have wacky ideas about Code generation for Haskell that you want to experiment with.
  • You have wacky ideas about Haskell that require special support in the backend, and want to prototype that.
  • You want to see how I use the GHC API to provide a ghc-like experience. (I copied GHC’s Main.hs and inserted a few hooks, an approach I copied from GHCJS).
  • You want to learn about running Haskell programs efficiently, and starting from veggies, you can implement all the tricks of the trade yourself and enjoy observing the speed-ups you get.
  • You want to compile Haskell code to some weird platform that is supported by LLVM, but where you for some reason cannot run GHC’s runtime. (Because there are no threads and no garbage collection, the code generated by veggies does not require a runtime system.)
  • You want to formally verify Haskell code generation. Note that the code generator targets the same AST for LLVM IR that the vellvm2 project uses, so eventually, veggies can become a verified arrow in the top right corner of the map of the DeepSpec project.

So feel free to play around with veggies, and report any issues you have on the GitHub repository.


Creative Commons License: Copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.