Planet Debian

Planet Debian - http://planet.debian.org/

Shirish Agarwal: Density and accessibility

13 February, 2017 - 05:44

A bit more than two decades back I was introduced to computers. The top of the line was the 386DX, which was mainly used in fat servers, and some lucky institutions had the 386SX, on which, if we were lucky, we could play some games. I was pretty bad at Prince of Persia and most of the games of the era, as most of them depended on lightning reflexes which I didn’t possess. Then 1997 happened and I was introduced to GNU/Linux, but my love for games still continued even though I was bad at most of them. The only saving grace were turn-based RPGs (role-playing games) which didn’t have permadeath, so you could plan your next move. Sometimes a wrong decision would lead you to a place from where it was impossible to move further. As that decision had usually been taken far, far back, the only recourse was to replay the game, which eventually led to me giving up on most of those kinds of games.

Then, in or around 2000, Maxis came out with The Sims. This was around the time I bought my first Pentium. I had never played a game which had you building and designing stuff, with no violence, where the whole idea was about balancing priorities: trying to get new stuff, going to work, maintaining relationships, and still making sure you eat, sleep and have a good time. While I probably spent something close to 500-odd hours in the game, or even more, I spent the least amount of time on building the house. It would be 4×4 when starting out (you don’t have much in-game money, and there is other stuff you want to buy as well), growing to 8×8 or, at the very grandest, 12×12. Only the first time did I spend time figuring out where the bathroom, the kitchen and the bedroom should be; after that I could do it blindfolded. The idea behind my house design was simplicity and efficiency (for my character). I used to see other people’s grand creations and couldn’t understand why they made such big houses.

A few days back, I watched a few episodes of a game called ‘Stranded Deep’. The plot is simple: you, the player, are travelling in a chartered plane when lightning strikes (a game trope, as today’s aircraft are much better able to deal with lightning strikes), and our hero or heroine washes up on a beach with a raft and the barest of possessions. The whole game is about him or her trying to survive; once you get the hang of the basic mechanics and know what has to be done, you can do it. The only thing the game doesn’t have is farming, but as it has an unlimited procedural world, you just paddle, or go island hopping with a boat motor, and take whatever you need.

What was interesting to me was seeing a gamer putting so much time and passion in making a house.

While looking at that video, I realized that maybe because I live in a dense environment, the designs we make, whether of houses or anything else, are more about fitting in more and more people than about making sure that people are happy. Which leads to what I want to share next.

A couple of days back, I read Virali Modi’s account of how she was molested three times while trying to use Indian Railways. She has made a petition on change.org.

I do condemn the molestation: it is an affront to individual rights, freedom, liberty, free movement and dignity.

Some of the root causes she points out, for instance the failure to give differently-abled people the right to board first, and to wait until everybody has boarded properly before starting the train, are the most minimal steps that Indian Railways could take without spending even a paisa. The same could be said about sensitizing people, although I have some idea of why Indian Railways does not employ women porters or women attendants for precisely this job.

I accompanied a blind gentleman friend a few times on Indian Railways and, let me tell you, it was one of the most unpleasant experiences. The bogies given to them are similar to, or even worse than, what you see in unreserved compartments. The toilets were and are smelly, and the gap between the platform and the train was and is considerable for everybody: blind people, differently-abled people and elderly people alike. This is one of the causes of the accidents which happen almost every day on Indian Railways. I also learnt that blind people in particular ‘listen’ for a sort of low-frequency whistle/noise which tells them the disabled coupé/bogie/coach will arrive at a specific spot on the station. On a platform which could have anything between 1,500 and 2,000 people, navigating wouldn’t be easy. I don’t know how it is at other places.

The width of the entrance to the bogie is probably between 30 and 40 inches, but the design is such that 5-10 inches are taken up on each side. I remembered that last year our current Prime Minister, Mr. Narendra Modi, had launched the Accessible India campaign with great fanfare, and we didn’t hear much about it after that.

Unfortunately, the campaign’s site itself has latency and accessibility issues, besides not giving any real advice if a person wants to know what building norms one should follow to make an area accessible. This was easily seen in last year’s audit in Delhi as well as other places. A couple of web searches later, I landed on a Canadian site to get some idea of the width of a wheelchair itself as well as the additional room needed to manoeuvre.

Unfortunately, the best or the most modern coaches/bogies that Indian Railways has to offer are the LHB Bogies/Coaches.

Now, while these coaches/bogies are by themselves a big improvement over the ICF coaches which we still have and use, if you read the advice and directions shared on the Canadian site, they are far from satisfactory for people who are wheelchair-bound. According to the Government’s own census records, 0.6% of the population have movement issues. Adding up all the differently-abled people, it comes to between 2 and 2.5% of the population, which is quite a bit. If 2-3 people out of every 100 are differently-abled, then we need to figure out something for them. While I don’t have any ideas as to how we could improve the surroundings, it is clear that we need the change.

While I was thinking, dreaming and understanding some of these nuances, my attention and memories inadvertently shifted to my ‘toilet’ experiences at both the Mumbai and Doha airports. As I had shared then, I had been pleasantly surprised to see that at both airports the toilets were pretty wide; a part of me was happy and a part of me saw the added space as wastefulness. With an understanding of the needs of differently-abled people it started to make a whole lot of sense. I don’t remember if I had shared that then or not. I am, though, left wondering where they go to the loo in the aircraft. The regular toilets are a tight fit even for obese people; I am guessing aircraft have toilets for differently-abled people as well.

Looking back at last year’s conference, we had 2-3 differently-abled people. I am just guessing that it wouldn’t have been a pleasant experience for them. For instance, where we were staying at UCT there were stairs and no lifts, so by default they were probably on the ground floor. Then, where we were staying and where most of the talks were held were a few hundred metres apart, and the shortest route was by stairs; the roundabout way was by road, but with vehicles around, so probably not safe that way either. I am guessing they had to be dependent on other people to figure things out. There were so many places where there were stairs and no ramps, and even where there were ramps they were probably a bit steeper than the 1:12 which is the advised gradient.

I have heard that this year’s venue is also a bit challenging in terms of accessibility for differently-abled people. I am clueless as to how differently-abled people found DebConf16 in terms of accessibility. A related query: if a DebConf’s final report mentions issues with accessibility, do the venues make any changes so that at some future date differently-abled people would have a better time? I know of Indian institutions’ reluctance to change or to spend money; I don’t know how western countries do it. Any ideas or comments are welcome.


Filed under: Miscellaneous Tagged: #386, #accessibility, #air-travel, #Computers, #differently-abled, #Railways, gaming

Dirk Eddelbuettel: Letting Travis keep a secret

13 February, 2017 - 00:24

More and more packages, be it for R or another language, are now interfacing different application programming interfaces (API) which are exposed to the web. And many of these may require an API key, or token, or account and password.

Which traditionally poses a problem in automated tests such as those running on the popular Travis CI service which integrates so well with GitHub. A case in point is the RPushbullet package where Seth Wenchel and I have been making a few recent changes and additions.

And yesterday morning, I finally looked more closely into providing Travis CI with the required API key so that we could in fact run continuous integration with unit tests following each commit. And it turns out that it is both easy and quick to do, and yet another great showcase for ad-hoc Docker use.

The rest of this post will give a quick minimal run-down, this time using the gtrendsR package by Philippe Massicotte and myself. Start by glancing at the 'encrypting files' HOWTO from Travis itself.

We assume you have Docker installed, and a suitable base package. We will need Ruby, so any base Linux image will do. In what follows, I use Ubuntu 14.04 but many other Debian, Ubuntu, Fedora, ... flavours could be used provided you know how to pick the relevant packages. What is shown here should work on any recent Debian or Ubuntu flavour 'as is'.

We start by firing off the Docker engine in the repo directory for which we want to create an encrypted file. The -v $(pwd):/mnt switch mounts the current directory as /mnt in the Docker instance:

edd@max:~/git/gtrendsr(master)$ docker run --rm -ti -v $(pwd):/mnt ubuntu:trusty
root@38b478356439:/# apt-get update    ## this takes a minute or two
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:2 http://archive.ubuntu.com trusty-security InRelease [65.9 kB]
# ... a dozen+ lines omitted ...
Get:21 http://archive.ubuntu.com trusty/restricted amd64 Packages [16.0 kB]    
Get:22 http://archive.ubuntu.com trusty/universe amd64 Packages [7589 kB]      
Fetched 22.4 MB in 6min 40s (55.8 kB/s)                                        
Reading package lists... Done
root@38b478356439:/# 

We then install what is needed to actually install the travis (Ruby) gem, as well as git which is used by it:

root@38b478356439:/# apt-get install -y ruby ruby-dev gem build-essential git
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
# ... lot of output ommitted ...
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for sgml-base (1.26+nmu4ubuntu1) ...
root@38b478356439:/# 

This too may take a few minutes, depending on the networking bandwidth and other factors, and should in general succeed without the need for any intervention. Once it has concluded, we can use the now-complete infrastructure to install the travis command-line client:

root@38b478356439:/# gem install travis
Fetching: multipart-post-2.0.0.gem (100%)
Fetching: faraday-0.11.0.gem (100%)
Fetching: faraday_middleware-0.11.0.1.gem (100%)
Fetching: highline-1.7.8.gem (100%)
Fetching: backports-3.6.8.gem (100%)
Fetching: multi_json-1.12.1.gem (100%)
# ... many lines omitted ...
Installing RDoc documentation for websocket-1.2.4...
Installing RDoc documentation for json-2.0.3...
Installing RDoc documentation for pusher-client-0.6.2...
Installing RDoc documentation for travis-1.8.6...
root@38b478356439:/#                        

This in turn will take a moment.

Once done, we can use the travis client to log in to GitHub. In my case this requires a password and a two-factor authentication code. Also note that we switch directories first to be in the actual repo we had mounted when launching docker.

root@38b478356439:/# cd /mnt/    ## change to repo directory
root@38b478356439:/mnt# travis --login
Shell completion not installed. Would you like to install it now? |y| y
We need your GitHub login to identify you.
This information will not be sent to Travis CI, only to api.github.com.
The password will not be displayed.

Try running with --github-token or --auto if you don't want to enter your password anyway.

Username: eddelbuettel
Password for eddelbuettel: ****************
Two-factor authentication code for eddelbuettel: xxxxxx
Successfully logged in as eddelbuettel!
root@38b478356439:/mnt# 

Now the actual work of encrypting. For this particular package, we need a file .Rprofile containing a short options() segment setting a user-id and password:

root@38b478356439:/mnt# travis encrypt-file .Rprofile
Detected repository as PMassicotte/gtrendsR, is this correct? |yes| 
encrypting .Rprofile for PMassicotte/gtrendsR
storing result as .Rprofile.enc
storing secure env variables for decryption

Please add the following to your build script (before_install stage in your .travis.yml, for instance):

    openssl aes-256-cbc -K $encrypted_988d19a907a0_key -iv $encrypted_988d19a907a0_iv -in .Rprofile.enc -out .Rprofile -d

Pro Tip: You can add it automatically by running with --add.

Make sure to add .Rprofile.enc to the git repository.
Make sure not to add .Rprofile to the git repository.
Commit all changes to your .travis.yml.
root@38b478356439:/mnt#

That's it. Now we just need to follow through as indicated, committing the .Rprofile.enc file, making sure to not commit its input file .Rprofile, and adding the proper openssl invocation with the keys known only to Travis to the file .travis.yml.
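
For completeness, here is a minimal sketch of how the resulting .travis.yml fragment might look; the openssl line and the variable names are exactly the ones printed by travis encrypt-file above, and everything else about the build configuration is omitted:

# before_install stage of .travis.yml: decrypt .Rprofile.enc using the
# keys Travis stored for this repository (variable names from the output above)
before_install:
  - openssl aes-256-cbc -K $encrypted_988d19a907a0_key -iv $encrypted_988d19a907a0_iv -in .Rprofile.enc -out .Rprofile -d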

Stefano Zacchiroli: Opening the Software Heritage archive

12 February, 2017 - 21:03
... one API (and one FOSDEM) at a time

[ originally posted on the Software Heritage blog, reposted here with minor adaptations ]

Last Saturday at FOSDEM we opened up the public API of Software Heritage, allowing anyone to programmatically browse its archive.

We posted this while I was keynoting with Roberto at FOSDEM 2017, discussing the role Software Heritage plays in preserving the Free Software commons. To accompany the talk we released our first public API, which makes it possible to navigate the entire content of the Software Heritage archive as a graph of connected development objects (e.g., blobs, directories, commits, releases, etc.).

Over the past months we have been busy working on getting source code (with full development history) into the archive, to minimize the risk that important bits of Free/Open Source Software that are publicly available today disappear forever from the net, due to whatever reason --- crashes, black hat hacking, business decisions, you name it. As a result, our archive is already one of the largest collections of source code in existence, spanning a GitHub mirror, injections of important Free Software collections such as Debian and GNU, and an ongoing import of all Google Code and Gitorious repositories.

Up to now, however, the archive was deposit-only. There was no way for the public to access its content. While there is a lot of value in archival per se, our mission is to Collect, Preserve, and Share all the material we collect with everybody. Plus, we totally get that a deposit-only library is much less exciting than a store-and-retrieve one! Last Saturday we took a first important step towards providing full access to the content of our archive: we released version 1 of our public API, which allows navigating the Software Heritage archive programmatically.

You can have a look at the API documentation for full details about how it works. But to briefly recap: conceptually, our archive is a giant Merkle DAG connecting together all development-related objects we encounter while crawling public VCS repositories, source code releases, and GNU/Linux distribution packages. Examples of the objects we store are: file contents, directories, commits, releases; as well as their metadata, such as: log messages, author information, permission bits, etc.

The API we have just released allows pointwise navigation of this huge graph. Using the API you can look up individual objects by their IDs, retrieve their metadata, and jump from one object to another following links --- e.g., from a commit to the corresponding directory or parent commits, from a release to the annotated commit, etc. Additionally, you can retrieve crawling-related information, such as the software origins we track (usually as VCS clone/checkout URLs), and the full list of visits we have done on any known software origin. This allows, for instance, knowing when we took snapshots of a Git repository you care about and, for each visit, where each branch of the repo was pointing at that time.
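
To make this concrete, here is a hedged sketch of two such pointwise lookups done with plain curl. The endpoint paths and placeholders below are assumptions on my part, so please refer to the API documentation for the authoritative routes:

curl https://archive.softwareheritage.org/api/1/revision/<sha1_git>/
    # metadata of a single revision (commit), looked up by its ID
curl https://archive.softwareheritage.org/api/1/origin/<origin_id>/visits/
    # the list of visits recorded for one of the software origins we track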

Our resources for offering the API as a public service are still quite limited. This is the reason why you will encounter a couple of limitations. First, no download of the actual content of files we have stored is possible yet --- you can retrieve all content-related metadata (e.g., checksums, detected file types and languages, etc.), but not the actual content as a byte sequence. Second, some pretty severe rate limits apply; API access is entirely anonymous and users are identified by their IP address, and each "user" will be able to do a little more than 100 requests/hour. This is to keep our infrastructure sane while we grow in capacity and focus our attention on developing other archive features.

If you're interested in having rate limits lifted for a specific use case or experiment, please contact us and we will see what we can do to help.

If you'd like to contribute to increase our resource pool, have a look at our sponsorship program!

Elena 'valhalla' Grandi: Mobile-ish devices as freedom respecting working environments

12 February, 2017 - 17:05
Mobile-ish devices as freedom respecting working environments

On planet FSFE, there is starting to be a conversation on using tablets / Android as the main working platform.

It started with the article by Henri Bergius http://bergie.iki.fi/blog/working-on-android-2017/ which nicely covers all practical points, but is quite light on the issues of freedom.

This was rectified by the article by David Boddie http://www.boddie.org.uk/david/www-repo/Personal/Updates/2017/2017-02-11.html which makes an apt comparison of Android to “the platform it is replacing in many areas of work and life: Microsoft Windows” and criticises its lack of effective freedom, even when the OS was supposed to be under a free license.

I fully agree that lightweight/low powered hardware can be an excellent work environment, especially when on the go, and even for many kinds of software development, but I'd very much rather have that hardware run an environment that I can trust like Debian (or another traditional GNU/Linux distribution) rather than the phone based ones where, among other problems, there is no clear distinction between what is local and trustable and what is remote and under somebody else's control.

In theory, it would be perfectly possible to run Debian on most tablet and tablet-like hardware, and have such an environment; in practice this is hard for a number of reasons including the lack of mainline kernel support for most hardware and the way actually booting a different OS on it usually ranges from the quite hard to the downright impossible.

Luckily, there is some niche hardware that uses tablet/phone SoCs but is sold with a GNU/Linux distribution and can be used as a freedom respecting work environment on-the-go: my current setup includes an OpenPandora https://en.wikipedia.org/wiki/Pandora_(console) (running Angstrom + a Debian chroot) and an Efika MX Smartbook https://en.wikipedia.org/wiki/Efika, but they are both showing their age badly: they have little RAM (especially the Pandora), and they aren't fully supported by a mainline kernel, which means that you're stuck on an old kernel and dependent on the producer for updates (which for the Efika ended quite early; at least the Pandora is still somewhat supported, at least for bugfixes).

Right now I'm looking forward to two devices as a replacement: the DragonBox Pyra https://en.wikipedia.org/wiki/DragonBox_Pyra (still under preorders) and the TERES-I laptop kit https://www.olimex.com/Products/DIY%20Laptop/ (hopefully available for sale "in a few months", and with no current mainline support for the SoC, but there is hope to see it from the sunxi community http://linux-sunxi.org/Main_Page).

As for software, the laptop/clamshell design means that using a regular Desktop Environment (or, in my case, Window Manager) works just fine; I do hope that the availability of the Pyra (with its touchscreen and 4G/"phone" chip) will help to give a bit of life back to the efforts to improve mobile software on Debian https://wiki.debian.org/Mobile

Hopefully, more such devices will continue to be available, and also hopefully the trend for more openness of the hardware itself will continue; sadly I don't see this getting outside of a niche market in the next few years, but I think that this niche will remain strong enough to be sustainable.

P.S. from nitpicker-me: David Boddie mentions the ability to easily download sources for any component with apt-get source: the big difference IMHO is given by apt-get build-dep, which also installs every dependency needed to actually build the code you have just downloaded.
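
For example (a hedged sketch, with a placeholder package name):

apt-get source somepackage          # fetch the source of the component
sudo apt-get build-dep somepackage  # additionally install everything needed to build it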

P.S.2: I also agree with David Boddie that supporting Conservancy https://sfconservancy.org/supporter/ is very important, and there are still a few hours left to have the contribution count twice.

Iustin Pop: Fine art printing—at home

12 February, 2017 - 07:46
Fine art printing—at home

It is very interesting how people change over time. Way back in the analog film era, I was using a very cheap camera, and getting the film developed and pictures printed at random places in town. As the movement towards digital began, I started dreaming of a full digital workflow—take picture, download from camera, enjoy on your monitor. No more pesky physical stuff. And when I finally got a digital camera, I was oh-so-happy to finally get rid of films and prints.

But time passes, and a few years back, at the end of 2013, I had the misfortune to learn on various photography forums that, within certain limits, one can do high quality printing at home—quality high enough for serious prints. I had always imagined that "serious" prints could only happen on big, professional gear, but to my surprise, what I was reading was that many professional photographers do their prints themselves (for certain paper sizes). I had tried printing photos before on the laser printer that I wrote about, but that is a hilarious exercise, nothing more. The thinking process was pretty simple:

  • another hobby? check!
  • new gear to learn? check!
  • something more palpable to do with my photos? good enough reason, check!

So I decided to get a photo printer. Because hey, one more printer was the thing I was missing the most.

Ink

The thing with inkjet photo printers is that the bigger they are, the cheaper the ink is, and the more optimised they are for large-volume printing. The more optimisation for large volume, the worse the printers do if you don't print often enough, in the sense of dried ink. This means clogged heads, and each of the serious printer manufacturers (Canon, Epson, HP) deals with it in different ways; some have extra, spare lines in the print head that replace the clogged ones, others have replaceable print heads, others rely on wasting ink by trying to flush the ink lines, etc. Also, within each manufacturer's lines, different printers behave differently. So one must take this into account—how often will you print? Of course I thought very often, but the truth is, this is just another hobby, so time is lacking, and I have weeks go by without turning the printer on.

And so, I did have some problems with dried ink, but minor ones I'd say; I only had to run a "power cleaning" once, when due to real life I didn't have time to turn the printer on for months; I managed to choose a good printer in this regard. I never did compute, though, how much ink I wasted on cleaning the heads ☺

Paper

Another issue with printing is the fact that the result is a physical object, outside of the digital realm. And the transition from digital to physical is tricky.

First, the printer itself and the ink are one relatively straightforward choice: decide (by whatever criteria you want) on the printer, and most printers at this level have one set of inks only. But the problem is: which paper?

And as I learned, since how the paper looks is a subjective thing, this is an endless topic…

  • first question: glossy or matte ink?
  • if glossy, which type of paper? actually glossy (uh, no), semi-gloss, pearl, satin?
  • if matte, are we talking about textured or smooth matte?
  • what weight? fine art paper that I tested can go from a very interesting 100gsm (almost like standard paper) Rice Paper, to 210, 286, 310 (quite standard), 325, 350 and finally towards 390-410 heavy canvas;
  • on the more professional side, do you care about lifetime of paper? if you choose yes, then take care of choosing paper with no OBA—optical brightening agents;
  • and if you really want to go deep, what base? cellulose, alpha-cellulose or cotton?

As you can see, this is really a bottomless pit. I made the mistake of buying lots of sample packs, thinking that settling on a specific paper would be an objective process, but no. Three years later, I have a few favourite papers, but I'm sure I could have almost randomly chosen them (read 3 reviews, choose) and not gotten objectively different results.

ICC profiles and processing

Another thing is that simply having the printer and the paper doesn't mean everything is fixed. Since printers are analog devices, there needs to be a printer- and paper-specific colour profile, so that you get (on paper) what you see on the screen (which also needs to be calibrated). So when choosing the printer you should be careful to choose one which is common enough that it has profiles, ideally profiles made by the paper manufacturers themselves. Or, you can go the more basic route, and calibrate the printer/paper combination yourself! I skipped that part though. However you get a profile, if you tell your photo processing application what your display profile and your printer+paper profile are, then ideally what you see is what you get, this time for real.

Except… that sometimes the gamut of colours in the picture can't be represented entirely in either the display or the printer profile, so the display is an approximation, but a different one than your printer will produce on paper. So you learn about relative and perceptual colorimetric conversions, and you read many blog posts about which one to use for what type of pictures (portraits have different needs than landscapes), and you wonder why you chose this hobby.

Of course, you can somewhat avoid the previous two issues by going more old-school to black and white printing. This should be simple, right? Black and white, nothing more. Hah, you wish. Do you do the B&W conversion in your photo processing application, or in your printer? Some printers are renowned for their good B&W conversions, some not. If you print B&W, then the choice of papers also changes, because some papers are just awesome at B&W, but only so-so for colours. So says the internet, at least.

But even if you solve all of the above, don't give up just yet, because there is still a little problem. Even if you send the right colours to the printer, the way a certain picture looks on paper is different than on screen. This circles somewhat back to paper choice (glossy type ink having deeper blacks than matte, for example) and colour-vs-b&w, but is a general issue: displays have better contrasts than paper (this doesn't mean the pictures are better looking on screen though). So you use the soft-proofing function, but it looks completely weird, and you learn that you need to learn how specific papers will differ from screen, and that sometimes you don't need any adjustment, sometimes you need a +15, which might mean another run of the same print.

You print, then what?

So you print. Nice, high quality print. All colours perfect!

And then what? First, you wait. Because ink, as opposed to laser toner, is not "done" once the paper is out of the printer. It has to dry, which is a process taking about 24 hours in its initial phase, and which you help along by doing some stuff. The ink settles during this time in the paper, and only after that do you know what the final look of the print will be. Depending on what you plan to do with the print, you might want to lay a layer of protective stuff on top of it; a kind of protective film that will keep it in better shape over time, but which has the downside that a) it must definitely be applied after the ink has dried and the paper has for sure finished outgassing and b) it's a semi-hard layer, so you can't roll the paper anymore (if you were planning to do that for transport). Or you say damn it, this is anyway a poor picture…

So with the print all good and really in its final state, you go on and research what solutions are there for hanging prints at home. And look at frames, and think about behind-glass framing or no glass-framing, and and and… and you realise that if you just printed your photos at a lab, they'd come directly framed!

The really minimalist hanging solution that I bought a year ago is still unpacked 😕 Getting there, sometime!

Costs/economic sense

If you think all this effort is done in order to save money on prints, the answer is "Ha ha ha" ☺

While professional prints at a lab are expensive, how much do you think all the above (printer, inks, paper, framing, TIME) costs? A lot. It's definitely not worth it unless your day job is photography.

No, for me it was more the desire to own the photographic process from start to end: learn enough to be able to choose everything (camera which implies sensor which implies a lot of things, lens, post-processing, printer/ink, paper), and see (and have) the end result of your work in your hands.

Is it worth all the trouble?

Fast forward three years later, I still have the printer, although many times I was thinking of getting rid of it.

It takes space, it costs some money (although you don't realise this as you print, since you already sunk the money in consumables), it takes time.

Being able to print small photos for family (e.g. 10×15) is neat, but a small printer can do this as well, or you can order prints online, or print them from a memory card at many places. Being able to print A4-size (for which framing for e.g. desk-use is a pain) is also neat, but here there are still simpler solutions than your own big printer.

The difference is when you print large. You look at the picture on your big screen, you think about and imagine how it will look printed, and then you fire off an A2 print.

The printer starts, makes noises for about 10 minutes, and then you have the picture in your hands. The ink is still fresh (you know it takes 24 hours to settle), and has that nice ink smell that you don't get anymore in day to day life. With a good paper and a good printer, the way the picture looks is so special, that all the effort seems trivial now.

I don't know how looking at pictures on an 8K 30+ inch monitor will be; but there's an indescribable difference between back-lighted LCD and paper for the same picture. Even at the same relative size, the paper is real, while the picture is virtual. You look at the people in the picture on your display, whereas the people in the print look at you.

Maybe this is just size. A2 is bigger than my monitor… wait, no. A2 has a diagonal of ~29 inches (vs. the display at 30"). Maybe it's resolution? An A2 print out of D810 is small enough to still have good resolution (it's about 320dpi after the small cropping needed for correcting the aspect ratio, which matches the printer's native 360dpi resolution). Coupled with a good printer, it's more than high enough resolution that even with a loupe, there's enough detail in the picture to not see its "digital" history (i.e. no rasterization, no gradients, etc.) Note that 360dpi for photo inkjet printers is much different from 600-1200dpi for laser printers (which are raster-based, not ink droplet based, so it's really not comparable). In any case, the print, even at this (relatively large) size, feels like a reflection of reality. On the monitor, it still feels like a digital picture. I could take a picture of the print to show you, but that would defeat the point, wouldn't it 😜

And this is what prompted this blog post. I had a pretty intense week at work, so when the weekend came, I was thinking about what to do to disconnect and relax. I had a certain picture (people, group photo) that I had wanted to print for a while, and it was OK on the screen, but not special. I said, somewhat unenthusiastically, let's print it. And as the printer was slowly churning along, and the paper was coming out, I remembered why I don't get rid of the printer. Because every time I think about doing that, I say to myself "let's do one more print", which quickly turns into "wow, no, I'm keeping it". Because, even as our life migrates into the digital/virtual realm—or maybe more so—we're still living in the real world, and our eyes like to look at real objects.

And hey, on top of that, it was and still is a pretty intense learning experience!

Niels Thykier: On making Britney smarter

12 February, 2017 - 00:28

Updating Britney often makes our life easier. Like:

Concretely, transitions have become a lot easier.  When I joined the release team in the summer of 2011, about the worst thing that could happen was discovering that two transitions had become entangled. You would have to wait for everything to be ready to migrate at the same time and then you usually also had to tell Britney what had to migrate together.

Today, Britney will often (but not always) de-tangle the transitions on her own and very often figure out how to migrate packages without help. The latter is in fact very visible if you know where to look.  Behold, the number of manual “easy” and “hint”-hints by RT members per year[2]:

Year | Total | easy | hint
-----+-------+------+-----
2005 |   53  |   30 |  23 
2006 |  146  |   74 |  72
2007 |   70  |   40 |  30
2008 |  113  |   68 |  45
2009 |  229  |  171 |  58
2010 |  252  |  159 |  93
2011 |  255  |  118 | 137
2012 |   29  |   21 |   8
2013 |   36  |   30 |   6
2014 |   20  |   20 |   0
2015 |   25  |   17 |   8
2016 |   16  |   11 |   5
2017 |    1  |    1 |   0

As can be seen, the number of manual hints drops by a factor of ~8.8 between 2011 and 2012. Now, I have not actually done a proper statistical test of the data, but I have a hunch that the drop was “significant” (see also [3] for a very short data discussion).

 

In conclusion: Smooth-updates (which was enabled late in 2011) have been a tremendous success.

 

[1] A very surprising side-effect of that commit was that the (“original”) auto-hinter could now solve a complicated haskell transition. Turns out that it works a lot better, when you give correct information!

[2] As extracted by the following script and then manually massaged into an ASCII table. Tweak the in-line regex to see different hints.

respighi.d.o$ cd "/home/release/britney/hints" && perl -E '
    my (%years, %hints);
    while(<>) { 
        chomp;
        if (m/^\#\s*(\d{4})(?:-?\d{2}-?\d{2});/ or m/^\#\s*(?:\d+-\d+-\d+\s*[;:]?\s*)?done\s*[;:]?\s*(\d{4})(?:-?\d{2}-?\d{2})/) {
             $year = $1; next;
         }
         if (m/^((?:easy|hint) .*)/) {
             my $hint = $1; $years{$year}++ if defined($year) and not $hints{$hint}++;
             next;
         }
         if (m/^\s*$/) { $year = undef; next; }
    };
    for my $year (sort(keys(%years))) { 
        my $count = $years{$year};
        print "$year: $count\n"
    }' * OLD/jessie/* OLD/wheezy/* OLD/Lenny/* OLD/*

[3] I should probably mention for good measure that the extraction ignores all hints where it cannot figure out what year they were from, or whether they are duplicates.  Notably, it is omitting about 100 easy/hint-hints from “OLD/Lenny” (compared to a grep -c), which I think accounts for the low numbers from 2007 (among others).

Furthermore, hints files are not rotated based on year or age, nor am I sure we still have all complete hints files from all members.


Filed under: Debian, Release-Team

Reproducible builds folks: Reproducible Builds: week 93 in Stretch cycle

11 February, 2017 - 19:23

Here's what happened in the Reproducible Builds effort between Sunday January 29 and Saturday February 4 2017:

Media coverage

Dennis Gilmore and Holger Levsen presented "Reproducible Builds and Fedora" (Video, Slides) at Devconf.cz on January 27th 2017.

On February 1st, stretch/armhf reached 90% reproducible packages in our test framework, so that now all four tested architectures are ≥ 90% reproducible in stretch. Yay! For armhf this means 22472 reproducible source packages (in main); for amd64, arm64 and i386 these figures are 23363, 23062 and 22607 respectively.

Chris Lamb appeared on the Changelog podcast to talk about reproducible builds:

Holger Levsen pitched Reproducible Builds and our need for a logo in the "Open Source Design" room at FOSDEM 2017 (Video, 09:36 - 12:00).

Upcoming Events
  • The Reproducible Build Zoo will be presented by Vagrant Cascadian at the Embedded Linux Conference in Portland, Oregon, February 22nd.

  • Introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, March 5th.

  • Verifying Software Freedom with Reproducible Builds will be presented by Vagrant Cascadian at Libreplanet2017 in Boston, March 25th-26th.

Reproducible work in other projects

We learned that the "slightly more secure" Heads firmware (a Coreboot payload) is now reproducibly built regardless of host system or build directory. A picture says more than a thousand words:

Docker started preliminary work on making image builds reproducible.

Toolchain development and fixes

Ximin Luo continued to write code and test cases for the BUILD_PATH_PREFIX_MAP environment variable. He also did extensive research on cross-platform and cross-language issues with environment variables, filesystem paths, and character encodings, and started preparing a draft specification document to describe all of this.

Chris Lamb asked CPython to implement an environment variable PYTHONREVERSEDICTKEYORDER to add an option to reverse the iteration order of items in a dict. However this was rejected because they are planning to formally fix this order in the next language version.

Bernhard Wiedemann and Florian Festi added support for our SOURCE_DATE_EPOCH environment variable to the RPM Package Manager.
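
For background, SOURCE_DATE_EPOCH is simply a Unix timestamp exported into the build environment; a rough sketch of how it is typically used follows, where deriving the value from the last git commit is merely one common convention:

# derive a deterministic timestamp and export it; tools that honour
# SOURCE_DATE_EPOCH clamp the timestamps they embed to this value
# instead of using the current time
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
make    # or whatever build command the package uses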

James McCoy uploaded devscripts 2.17.1 with a change from Guillem Jover for dscverify(1), adding support for .buildinfo files. (Closes: #852801)

Piotr Ożarowski uploaded dh-python 2.20170125 with a change from Chris Lamb for a patch to fix #835805.

Chris Lamb added documentation to diffoscope, strip-nondeterminism, disorderfs, reprotest and trydiffoscope about uploading signed tarballs when releasing. He also added a link to these on our website's tools page.

Packages reviewed and bugs filed

Bugs filed:

Reviews of unreproducible packages

83 package reviews have been added, 86 have been updated and 276 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been updated:

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris Lamb (6)
diffoscope development

Work on the next version (71) continued in git this week:

  • Mattia Rizzolo:
    • Override a lintian warning.
  • Chris Lamb:
    • Update and consolidate documentation
    • Many test additions and improvements
    • Various code quality and software architecture improvements
  • anthraxx:
    • Update arch package, cdrkit -> cdrtools.
reproducible-website development

Daniel Shahaf added more notes on our "How to chair a meeting" document.

tests.reproducible-builds.org

Holger unblacklisted pspp and tiledarray. If you think further packages should also be unblacklisted (possibly only on some architectures), please tell us.

Misc.

This week's edition was written by Ximin Luo, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Mark Brown: We show up

11 February, 2017 - 19:12

It’s really common for pitches to management within companies about Linux kernel upstreaming to focus on cost savings to vendors from getting their code into the kernel, especially in the embedded space. These benefits are definitely real, especially for vendors trying to address the general market or extend the lifetime of their devices, but they are only part of the story. The other big thing that happens as a result of engaging upstream is that this is a big part of how other upstream developers become aware of what sorts of hardware and use cases there are out there.

From this point of view it’s often the things that are most difficult to get upstream that are the most valuable to talk to upstream about, but of course it’s not quite that simple, as a track record of engagement on the simpler drivers, and the knowledge and relationships that are built up in that process, make having discussions about harder issues a lot easier. There are engineering and cost benefits that come directly from having code upstream, but it’s not just that: the more straightforward upstreaming is also an investment in making it easier to work with the community to solve the more difficult problems.

Fundamentally Linux is made by and for the people and companies who show up and participate in the upstream community. The more ways people and companies do that the better Linux is likely to meet their needs.

Noah Meyerhans: Using FAI to customize and build your own cloud images

11 February, 2017 - 14:42

At this past November's Debian cloud sprint, we classified our image users into three broad buckets in order to help guide our discussions and ensure that we were covering the common use cases. Our users fit generally into one of the following groups:

  1. People who directly launch our image and treat it like a classic VPS. These users most likely will be logging into their instances via ssh and configuring it interactively, though they may also install and use a configuration management system at some point.
  2. People who directly launch our images but configure them automatically via launch-time configuration passed to the cloud-init process on the agent. This automatic configuration may optionally serve to bootstrap the instance into a more complete configuration management system. The user may or may not ever actually log in to the system at all.
  3. People who will not use our images directly at all, but will instead construct their own image based on ours. They may do this by launching an instance of our image, customizing it, and snapshotting it, or they may build a custom image from scratch by reusing and modifying the tools and configuration that we use to generate our images.

This post is intended to help people in the final category get started with building their own cloud images based on our tools and configuration. As I mentioned in my previous post on the subject, we are using the FAI project with configuration from the fai-cloud-images repository. It's probably a good idea to get familiar with FAI and our configs before proceeding, but it's not necessary.

You'll need to use FAI version 5.3.4 or greater. 5.3.4 is currently available in stretch and jessie-backports. Images can be generated locally on your non-cloud host, or on an existing cloud instance. You'll likely find it more convenient to use a cloud instance so you can avoid the overhead of having to copy disk images between hosts. For the most part, I'll assume throughout this document that you're generating your image on a cloud instance, but I'll highlight the steps where it actually matters. I'll also be describing the steps to target AWS, though the general workflow should be similar if you're targeting a different platform.

To get started, install the fai-server package on your instance and clone the fai-cloud-images git repository. (I'll assume the repository is cloned to /srv/fai/config.) In order to generate your own disk image that generally matches what we've been distributing, you'll use a command like:

sudo fai-diskimage --hostname stretch-image --size 8G \
--class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2 \
/tmp/stretch-image.raw

This command will create an 8 GB raw disk image at /tmp/stretch-image.raw, create some partitions and filesystems within it, and install and configure a bunch of packages into it. Exactly what packages it installs and how it configures them will be determined by the FAI config tree and the classes provided on the command line. The package_config subdirectory of the FAI configuration contains several files, the names of which are FAI classes. Activating a given class by referencing it on the fai-diskimage command line instructs FAI to process the contents of the matching package_config file if such a file exists. The files use a simple grammar that provides you with the ability to request certain packages to be installed or removed.

Let's say for example that you'd like to build a custom image that looks mostly identical to Debian's images, but that also contains the Apache HTTP server. You might do that by introducing a new file, package_config/HTTPD, as follows:

PACKAGES install
apache2

Then, when running fai-diskimage, you'll add HTTPD to the list of classes:

sudo fai-diskimage --hostname stretch-image --size 8G \
--class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2,HTTPD \
/tmp/stretch-image.raw

Aside from custom package installation, you're likely to also want custom configuration. FAI allows the use of pretty much any scripting language to perform modifications to your image. A common task that these scripts may want to perform is the installation of custom configuration files. FAI provides the fcopy tool to help with this. Fcopy is aware of FAI's class list and is able to select an appropriate file from the FAI config's files subdirectory based on classes. The scripts/EC2/10-apt script provides a basic example of using fcopy to select and install an apt sources.list file. The files/etc/apt/sources.list/ subdirectory contains both an EC2 and a GCE file. Since we've enabled the EC2 class on our command line, fcopy will find and install that file. You'll notice that the sources.list subdirectory also contains a preinst file, which fcopy can use to perform additional actions prior to actually installing the specified file. postinst scripts are also supported.
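
As a rough sketch (the script name, class and target file here are hypothetical, not taken from the fai-cloud-images config), a customization script for the HTTPD class from the earlier example could install a class-specific Apache configuration like this:

#! /bin/bash
# hypothetical scripts/HTTPD/10-config: fcopy searches
# files/etc/apache2/apache2.conf/ for the entry best matching the
# currently active classes and installs it into the target filesystem
fcopy /etc/apache2/apache2.conf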

Beyond package and file installation, FAI also provides mechanisms to support debconf preseeding, as well as hooks that are executed at various stages of the image generation process. I recommend following the examples in the fai-cloud-images repo, as well as the FAI guide for more details. I do have one caveat regarding the documentation, however: FAI was originally written to help provision bare-metal systems, and much of its documentation is written with that use case in mind. The cloud image generation process is able to ignore a lot of the complexity of these environments (for example, you don't need to worry about pxeboot and tftp!) However, this means that although you get to ignore probably half of the FAI Guide, it's not immediately obvious which half it is that you get to ignore.

Once you've generated your raw image, you can inspect it by telling Linux about the partitions contained within, and then mount and examine the filesystems. For example:

admin@ip-10-0-0-64:~$ sudo partx --show /tmp/stretch-image.raw
NR START      END  SECTORS SIZE NAME UUID
 1  2048 16777215 16775168   8G      ed093314-01
admin@ip-10-0-0-64:~$ sudo partx -a /tmp/stretch-image.raw 
partx: /dev/loop0: error adding partition 1
admin@ip-10-0-0-64:~$ lsblk 
NAME      MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
xvda      202:0    0      8G  0 disk 
├─xvda1   202:1    0 1007.5K  0 part 
└─xvda2   202:2    0      8G  0 part /
loop0       7:0    0      8G  0 loop 
└─loop0p1 259:0    0      8G  0 loop 
admin@ip-10-0-0-64:~$ sudo mount /dev/loop0p1 /mnt/
admin@ip-10-0-0-64:~$ ls /mnt/
bin/   dev/  home/        initrd.img.old@  lib64/       media/  opt/   root/  sbin/  sys/  usr/  vmlinuz@
boot/  etc/  initrd.img@  lib/             lost+found/  mnt/    proc/  run/   srv/   tmp/  var/  vmlinuz.old@

In order to actually use your image with your cloud provider, you'll need to register it with them. Strictly speaking, these are the only steps that are provider specific and need to be run on your provider's cloud infrastructure. AWS documents this process in the User Guide for Linux Instances. The basic workflow is:

  1. Attach a secondary EBS volume to your EC2 instance. It must be large enough to hold the raw disk image you created.
  2. Use dd to write your image to the secondary volume, e.g. sudo dd if=/tmp/stretch-image.raw of=/dev/xvdb
  3. Use the volume-to-ami.sh script in the fai-cloud-images repo to snapshot the volume and register the resulting snapshot with AWS as a new AMI. Example: ./volume-to-ami.sh vol-04351c30c46d7dd6e

The volume-to-ami.sh script must be run with access to AWS credentials that grant access to several EC2 API calls: describe-snapshots, create-snapshot, and register-image. It recognizes a --help command-line flag and several options that modify characteristics of the AMI that it registers. When volume-to-ami.sh completes, it will print the AMI ID of your new image. You can now work with this image using standard AWS workflows.

As always, we welcome feedback and contributions via the debian-cloud mailing list or #debian-cloud on IRC.

Jonas Meurer: debian lts report 2017.01

10 February, 2017 - 23:07
Debian LTS report for January 2017

January 2017 was my fifth month as a Debian LTS team member. I was allocated 12 hours and had 6.75 hours left over from December 2016. This makes a total of 18.75 hours. Unfortunately I found less time than expected to work on Debian LTS in January. In total, I spent 9 hours on the following security updates:

  • DLA 787-1: XSS protection via Content Security Policy for otrs2
  • DLA 788-1: fix vulnerability in pdns-recursor by dropping illegitimate long queries
  • DLA 798-1: fix multiple vulnerabilities in pdns
Links

Rhonda D'Vine: Anouk

10 February, 2017 - 19:19

I need music to be more productive. Sitting in an open workspace, it also helps to shut out outside noise. And often enough I just turn cmus into shuffle mode and let it play whatever comes along. Yesterday I stumbled again upon a singer whose voice I fell in love with a long time ago. This is about Anouk.

The song was on a compilation series that I followed because it so easily brought great groups to my attention in a genre that I simply love. It was called "Crossing All Over!" and featured several groups that I dug further into and still love to listen to.

Anyway, don't want to delay the songs for you any longer, so here they are:

  • Nobody's Wife: The first song I heard from her, and her voice totally caught me.
  • Lost: A quieter song for a break.
  • Modern World: A great song about the toxic beauty norms that society likes to paint. Lovely!

Like always, enjoy!


Dirk Eddelbuettel: anytime 0.2.1

10 February, 2017 - 18:37

An updated anytime package arrived at CRAN yesterday. This is release number nine, and the first with a little gap to the prior release on Christmas Eve as the features are stabilizing, as is the implementation.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, ... format to either POSIXct or Date objects -- and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

This release addresses two small things related to the anydate() and utcdate() conversions (see below) and adds one nice new format, besides some internal changes detailed below:

R> library(anytime)
R> anytime("Thu Sep 01 10:11:12 CDT 2016")
[1] "2016-09-01 10:11:12 CDT"
R> anytime("Thu Sep 01 10:11:12.123456 CDT 2016") # with frac. seconds
[1] "2016-09-01 10:11:12.123456 CDT"
R> 

Of course, all commands are also fully vectorised. See the anytime page, or the GitHub README.md for more examples.

Changes in anytime version 0.2.1 (2017-02-09)
  • The new DatetimeVector class from Rcpp is now used, and proper versioned Depends: have been added (#43)

  • The anydate and utcdate functions convert again from factor and ordered (#46 closing #44)

  • A format similar to RFC 2822 but with additional timezone text can now be parsed (#48 closing #47)

  • Conversion from POSIXt to Date now also respects the timezone (#50 closing #49)

  • The internal .onLoad function was updated

  • The Travis setup uses https to fetch the run script

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steve Kemp: Old packages are interesting.

9 February, 2017 - 21:29

Recently Vincent Bernat wrote about writing his own simple terminal, using vte. That was a fun read, as the sample code built really easily and was functional.

At the end of his post he said :

evilvte is quite customizable and can be lightweight. Consider it as a first alternative. Honestly, I don’t remember why I didn’t pick it.

That set me off looking at evilvte, and it was one of those rare projects which seems to be pretty stable, and also hasn't changed in any recent release of Debian GNU/Linux:

  • lenny had 0.4.3-1.
  • etch had nothing.
  • squeeze had 0.4.6-1.
  • wheezy has release 0.5.1-1.
  • jessie has release 0.5.1-1.
  • stretch has release 0.5.1-1.
  • sid has release 0.5.1-1.

I wonder if it would be possible to easily generate a list of packages which have the same revision in multiple distributions? Anyway I had a look at the source, and unfortunately spotted that it didn't entirely handle clicking on hyperlinks terribly well. Clicking on a link would pretty much run:

 firefox '%s'

That meant there was an obvious security problem.
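
To illustrate (a hypothetical example, assuming the substituted command is handed to a shell): a crafted link can close the quoting early and smuggle in its own command.

 # a 'URL' such as   http://example.com/'; touch /tmp/pwned; echo '
 # substituted into the template above would effectively run
 firefox 'http://example.com/'; touch /tmp/pwned; echo ''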

It is a great terminal though, and it just goes to show how short, simple, and readable such things can be. I enjoyed looking at the source, and furthermore enjoyed using it. Unfortunately due to a dependency issue it looks like this package will be removed from stretch.

Charles Plessy: Beware of libinput 1.6.0-1

9 February, 2017 - 20:22

Since I updated this evening, touch to click with my touchpad is almost totally broken. Fortunately, a correction is pending.

Sven Hoexter: Limit host access based on LDAP groupOfUniqueNames with sssd

9 February, 2017 - 19:02

For CentOS 4 to CentOS 6 we used pam_ldap to restrict host access to machines, based on groupOfUniqueNames listed in an OpenLDAP directory. With RHEL/CentOS 6 RedHat already deprecated pam_ldap and highly recommended using sssd instead, and with RHEL/CentOS 7 they finally removed pam_ldap from the distribution.

Since pam_ldap supported groupOfUniqueNames to restrict logins, a bigger collection of groupOfUniqueNames was created over time to restrict access to all kinds of groups/projects and so on. But sssd is in general only able to filter based on an "ldap_access_filter" or to use the host attribute via "ldap_user_authorized_host". That does not allow the use of "groupOfUniqueNames" directly. So, to allow a smooth migration, I had to configure sssd in some way to still support groupOfUniqueNames. The configuration I ended up with looks like this:

[domain/hostacl]
autofs_provider = none 
ldap_schema = rfc2307bis
# to work properly we've to keep the search_base at the highest level
ldap_search_base = ou=foo,ou=people,o=myorg
ldap_default_bind_dn = cn=ro,ou=ldapaccounts,ou=foo,ou=people,o=myorg
ldap_default_authtok = foobar
id_provider = ldap
auth_provider = ldap
chpass_provider = none
ldap_uri = ldaps://ldapserver:636
ldap_id_use_start_tls = false
cache_credentials = false
ldap_tls_cacertdir = /etc/pki/tls/certs
ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt
ldap_tls_reqcert = allow
ldap_group_object_class = groupOfUniqueNames
ldap_group_member = uniqueMember
access_provider = simple
simple_allow_groups = fraappmgmtt

[sssd]
domains = hostacl
services = nss, pam
config_file_version = 2

Important side note: With current sssd versions you're more or less forced to use ldaps with a validating CA chain, though hostnames are not required to match the CN/SAN so far.

Relevant are:

  • set the ldap_schema to rfc2307bis to use a schema that knows about groupOfUniqueNames at all
  • set the ldap_group_object_class to groupOfUniqueNames
  • set the ldap_group_member to uniqueMember
  • use the access_provider simple

In practice, what we do is match the members of the groupOfUniqueNames to sssd's internal group representation.

The best explanation of the several possible LDAP object classes for representing groups that I've found so far is unfortunately in a German blog post. Another explanation is in the LDAP wiki. In short: within a groupOfUniqueNames you'll find full DNs, while in a posixGroup you usually find login names (see the sketch below); the different object classes therefore require different handling.
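
As a hedged illustration (these entries are made up for this post, not taken from the real directory), the structural difference looks roughly like this:

dn: cn=fraappmgmtt,ou=groups,o=myorg
objectClass: groupOfUniqueNames
cn: fraappmgmtt
uniqueMember: uid=jdoe,ou=foo,ou=people,o=myorg

dn: cn=someposixgroup,ou=groups,o=myorg
objectClass: posixGroup
cn: someposixgroup
gidNumber: 10042
memberUid: jdoe

The first form carries full DNs in uniqueMember, the second only login names in memberUid, which is why sssd has to be told about both the object class and the member attribute.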

Next step would be to move auth and nss functionality to sssd as well.

Vincent Bernat: Integration of a Go service with systemd

9 February, 2017 - 15:00

Unlike other programming languages, Go’s runtime doesn’t provide a way to reliably daemonize a service. A system daemon has to supply this functionality. Most distributions ship systemd which would fit the bill. A correct integration with systemd is quite straightforward. There are two interesting aspects: readiness & liveness.

As an example, we will daemonize this service whose goal is to answer requests with nifty 404 errors:

package main

import (
    "log"
    "net"
    "net/http"
)

func main() {
    l, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
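    // A nil handler makes http.Serve() use http.DefaultServeMux; since no
    // routes are registered on it, every request is answered with a 404.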
    http.Serve(l, nil)
}

You can build it with go build 404.go.

Here is the service file, 404.service1:

[Unit]
Description=404 micro-service

[Service]
Type=notify
ExecStart=/usr/bin/404
WatchdogSec=30s
Restart=on-failure

[Install]
WantedBy=multi-user.target

Readiness

The classic way for a Unix daemon to signal its readiness is to daemonize. Technically, this is done by calling fork(2) twice (which also has other uses). This is a very common task and the BSD systems, as well as some other C libraries, supply a daemon(3) function for this purpose. Services are expected to daemonize only when they are ready (after reading configuration files and setting up a listening socket, for example). Then, a system can reliably initialize its services with a simple linear script:

syslogd
unbound
ntpd -s

Each daemon can rely on the previous one being ready to do its work. The sequence of actions is the following:

  1. syslogd reads its configuration, activates /dev/log, daemonizes.
  2. unbound reads its configuration, listens on 127.0.0.1:53, daemonizes.
  3. ntpd reads its configuration, connects to NTP peers, waits for clock to be synchronized2, daemonizes.

With systemd, we would use Type=forking in the service file. However, Go’s runtime does not support that. Instead, we use Type=notify. In this case, systemd expects the daemon to signal its readiness with a message to a Unix socket. The go-systemd package handles the details for us:

package main

import (
    "log"
    "net"
    "net/http"

    "github.com/coreos/go-systemd/daemon"
)

func main() {
    l, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
    daemon.SdNotify(false, "READY=1") // ❶
    http.Serve(l, nil)                // ❷
}

It’s important to place the notification after net.Listen() (in ❶): if the notification was sent earlier, a client would get “connection refused” when trying to use the service. When a daemon listens to a socket, connections are queued by the kernel until the daemon is able to accept them (in ❷).

If the service is not run through systemd, the added line is a no-op.
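
To try it out manually, one possibility (a sketch, assuming the binary is installed as /usr/bin/404 and the unit file in the directory from footnote 1; paths may differ on your system) is:

go build 404.go
sudo cp 404 /usr/bin/404
sudo cp 404.service /lib/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start 404.service
systemctl status 404.service
curl -sI http://127.0.0.1:8081/

With Type=notify, systemctl start only returns once READY=1 has been received, systemctl status should then report the unit as active (running), and curl should get an HTTP 404 back.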

Liveness

Another interesting feature of systemd is to watch the service and restart it if it happens to crash (thanks to the Restart=on-failure directive). It’s also possible to use a watchdog: the service sends watchdog keep-alives at regular intervals. If it fails to do so, systemd will restart it.

We could insert the following code just before the http.Serve() call (adding "time" to the imports):

go func() {
    interval, err := daemon.SdWatchdogEnabled(false)
    if err != nil || interval == 0 {
        return
    }
    for {
        daemon.SdNotify(false, "WATCHDOG=1")
        time.Sleep(interval / 3)
    }
}()

However, this doesn’t add much value: the goroutine is unrelated to the core business of the service. If, for some reason, the HTTP part gets stuck, the goroutine will happily continue sending keep-alives to systemd.

In our example, we can just do an HTTP query before sending the keep-alive. The internal loop can be replaced with this code:

for {
    _, err := http.Get("http://127.0.0.1:8081") // ❸
    if err == nil {
        daemon.SdNotify(false, "WATCHDOG=1")
    }
    time.Sleep(interval / 3)
}

In ❸, we connect to the service to check if it’s still working. If we get some kind of answer, we send a watchdog keep-alive. If the service is unavailable or if http.Get() gets stuck, systemd will trigger a restart.

There is no universal recipe. However, checks can be split into two groups:

  • Before sending a keep-alive, you execute an active check on the components of your service. The keep-alive is sent only if all checks are successful. The checks can be internal (like in the above example) or external (for example, check with a query to the database).

  • Each component reports its status, telling whether it’s alive or not. Before sending a keep-alive, you check the reported status of all components (passive check). If some components are late or reported fatal errors, don’t send the keep-alive (see the sketch after this list).
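
As a minimal sketch of this second, passive approach (everything below is hypothetical and not part of go-systemd, except SdWatchdogEnabled() and SdNotify(); it assumes "sync" and "time" are imported alongside the daemon package): each component calls Report() whenever it completes a unit of work, and the watchdog loop only notifies systemd while every component has reported recently.

// healthBoard records when each named component last reported being alive.
type healthBoard struct {
    mu   sync.Mutex
    seen map[string]time.Time
}

func newHealthBoard() *healthBoard {
    return &healthBoard{seen: make(map[string]time.Time)}
}

// Report is called by a component whenever it completes a unit of work.
func (b *healthBoard) Report(component string) {
    b.mu.Lock()
    defer b.mu.Unlock()
    b.seen[component] = time.Now()
}

// allAlive tells whether every component reported within maxAge.
func (b *healthBoard) allAlive(maxAge time.Duration) bool {
    b.mu.Lock()
    defer b.mu.Unlock()
    for _, t := range b.seen {
        if time.Since(t) > maxAge {
            return false
        }
    }
    return true
}

// watch sends WATCHDOG=1 only while all components look healthy.
func watch(b *healthBoard) {
    interval, err := daemon.SdWatchdogEnabled(false)
    if err != nil || interval == 0 {
        return
    }
    for {
        if b.allAlive(interval) {
            daemon.SdNotify(false, "WATCHDOG=1")
        }
        time.Sleep(interval / 3)
    }
}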

If possible, recovery from errors (for example, with a backoff retry) and self-healing (for example, by reestablishing a network connection) is always better, but the watchdog is a good tool to handle the worst cases and avoid overly complex recovery logic.

For example, if a component doesn’t know how to recover from an exceptional condition3, instead of using panic(), it could signal its situation before dying. Another dedicated component could try to resolve the situation by restarting the faulty component. If it fails to reach a healthy state in time, the watchdog timer will trigger and the whole service will be restarted.

  1. Depending on the distribution, this should be installed in /lib/systemd/system or /usr/lib/systemd/system. Check with the output of the command pkg-config systemd --variable=systemdsystemunitdir. 

  2. This highly depends on the NTP daemon used. OpenNTPD doesn’t wait unless you use the -s option. ISC NTP doesn’t either unless you use the --wait-sync option. 

  3. An example of an exceptional condition is to reach the limit on the number of file descriptors. Self-healing from this situation is difficult and it’s easy to get stuck in a loop. 

Dirk Eddelbuettel: RcppArmadillo 0.7.700.0.0

9 February, 2017 - 08:24

Time for another update of RcppArmadillo with a new release 0.7.700.0.0 based on a fresh Armadillo 7.700.0. Following my full reverse-dependency check of 318 packages (commit of the log here), CRAN took another day to check again.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language -- and is widely used by (currently) 318 other packages on CRAN -- an increase of 20 just since the last CRAN release of 0.7.600.1.0 in December!

Changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.700.0.0 (2017-02-07)
  • Upgraded to Armadillo release 7.700.0 ("Rogue State")

    • added polyfit() and polyval()

    • added second form of log_det() to directly return the result as a complex number

    • added range() to statistics functions

    • expanded trimatu()/trimatl() and symmatu()/symmatl() to handle sparse matrices

Changes in RcppArmadillo version 0.7.600.2.0 (2017-01-05)
  • Upgraded to Armadillo release 7.600.2 (Coup d'Etat Deluxe)

    • Bug fix to memory allocation for fields

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Iustin Pop: Solarized colour theme

9 February, 2017 - 07:18
Solarized

A while back I was looking for some information on the web, and happened upon a blog post about the subject. I don't remember what I was looking for, but on the same blog there was a screenshot of what I then learned was the Solarized theme. It caught my eye so much that I decided to try it myself ASAP.

Up until last year, I had been using the 'black on light yellow' xterm scheme for many years. This is good during the day, but too strong at night, so on some machines I switched to 'white on black', but this was not entirely satisfying.

The solarized theme promises consistent colours over both light and dark backgrounds, which would help to finally make my setups consistent, and it extends to a number of programs. Amongst these, there are themes for mutt on both light and dark backgrounds using only 16 colours. This was good, as my current hand-built theme is based on 256 colours, which doesn't work well in the Linux console.

So I tried changing my terminal to the custom colours, played with it for about 10 minutes, then decided that its contrast was too low, bordering on unreadable. I switched to another desktop where I still had an xterm open using white-on-black, and, this being at night, my eyes immediately went 'no no no, too high contrast'. Within about ten minutes I had got so used to the new theme that the old one was really, really uncomfortable. There was no turning back now ☺

Interestingly, the light theme was not that much better than black-on-light-yellow, as that theme is already pretty well behaved. But I still migrated for consistency.

Programs/configs

Starting from the home page and the internet, I found resources for:

  • Vim and Emacs (for the latter I use the Debian package elpa-solarized-theme).
  • Midnight Commander, for which I currently use peel's theme, although I'm not happy with it; interestingly, the default theme almost works on the 16-custom-colour light terminal scheme, but not quite on the dark one.
  • Mutt, for which there are themes both in the main combined repository and in the separate one. I'm not really happy with mutt's theme either, but that seems to be mostly because I was using a quite different theme before. I'll try to improve what I feel is missing over time.
  • dircolors; I found this to be an absolute requirement for good readability of ls --color, as the defaults are too bad.
  • I also took the opportunity to unify my git diff and colordiff theme, but this was not really something that I found and took 'as-is' from some repository; I basically built my own theme.

16 vs 256 colours

The solarized theme/configuration can be done in two ways:

  • by changing the Xresources/terminal 16 basic colours to custom RGB values, or:
  • by using approximations from the fixed 256 colours available in the xterm-256color terminfo

Upstream recommends the custom colours, as they are precisely tuned, instead of the approximated ones; honestly I don't know whether it makes a noticeable difference. It's too bad upstream went silent a few years back, as technically it's also possible to override colours above 16 in the 256-colour palette. In any case, each of the two options has its own cons:

  • using the customised 16 colours means that all terminal programs get the new colour scheme, even if they were designed (colour-wise) around the standard values; this makes some things pretty unreadable (hence the need to fix dircolors), but at least somewhat consistent.
  • using the 256-colour palette, unchanged programs stay the same, but now they look very different from the programs that were updated to solarized; note though that I haven't tested this, that's just how I understand things would be.

So either way it's not perfect.
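
For reference, the first option boils down to overriding the terminal's 16 palette entries (plus background and foreground) in ~/.Xresources. A minimal sketch for the dark variant, showing only a handful of entries; the full and exact assignment should be taken from the upstream Xresources file:

*background: #002b36
*foreground: #839496
*color1:     #dc322f
*color2:     #859900
*color3:     #b58900
*color4:     #268bd2

After loading this with xrdb -merge ~/.Xresources, newly started xterms pick up the scheme.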

Desktop-wide consistency

Also not perfect: for a properly consistent look, many more programs would have to be changed, but I don't see that happening in today's world. I've seen for example 3 or 4 Midnight Commander themes, but none of them were actually in the spirit of solarized, even though they were tweaked for it.

Even between vim and emacs, which both have one canonical solarized theme, the look is close but not really the same (looking at the markdown source for this blog post: URLs, headers and spelling mistakes all look different), but this might not necessarily be due to the theme itself.

So no global theme consistency (I wish there were), but still, I find this much easier on the eyes and no worse in readability after getting used to it.

Thanks Ethan!

Manuel A. Fernandez Montecelo: FOSDEM 2017: People, RISC-V and ChaosKey

9 February, 2017 - 06:52

This year, for the first time, I attended FOSDEM.

There I met...

People

... including:

  • friends that I don't see very often;
  • old friends that I didn't expect to see there, some of whom decided to travel from far away at the last minute;
  • people whom I met in person for the first time, having previously known them only through the internet -- one of whom is a protagonist in a previous blog entry, about the Debian port for OpenRISC;

I met new people in:

  • bars/pubs,
  • restaurants,
  • breakfast tables at lodgings,
  • and public transport.

... from the first hour to the last hour of my stay in Brussels.

In summary, lots of people around.

I also hoped to meet or spend some (more) time with a few people, but in the end I didn't catch them, or could not spend as much time with them as I would have wished.

For somebody like me who enjoys quiet time alone, it was a bit too intensive in terms of interacting with people. But overall it was a nice winter break, definitely worth attending, and an even better experience than I had expected.

Talks / Events

Of course, I also attended a few talks, some of which were very interesting; but the event was so (sometimes uncomfortably) crowded that the rooms were full more often than not, in which case it was either not possible to enter (the doors were closed) or there were very long queues.

And with so many talks crammed into a weekend, there were so many schedule clashes among the talks I had pre-selected as interesting that I ended up missing most of them.

In terms of technical stuff, I especially enjoyed the talk by Arun Thomas, RISC-V -- Open Hardware for Your Open Source Software, as well as some conversations related to toolchain and other upstream work, and to the Debian port for RISC-V.

The talk Resurrecting dinosaurs, what can possibly go wrong? -- How Containerised Applications could eat our users, by Richard Brown, was also very good.

ChaosKey

Apart from that, I witnessed a shady cash transaction in exchange for hardware on a bus from the city centre to FOSDEM, not very unlike what I had read about only days before.

So I could not help but get involved in a subsequent transaction myself, to lay my hands on a ChaosKey.



Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported licence.