Planet Debian


Robert Edmonds: Cable modems: Arris SB6190 vs. Netgear CM600

7 August, 2016 - 04:52

Recently I activated new cable ISP service at my home and needed to purchase a cable modem. There were only a few candidate devices that supported at least 24 downstream channels (preferably 32), and did not contain an integrated router or access point.

The first modem I tried was the Arris SB6190, which supports 32 downstream channels. It is based on the Intel Puma 6 SoC, and looking at an older release of the SB6190 firmware source reveals that the device runs Linux. This device, running the latest 9.1.93N firmware, goes into a failure mode after several days of uptime which causes it to drop 1-2% of packets. Here is a SmokePing graph that measures latency to my ISP's recursive DNS server, showing the transition into the “degraded” mode:

It didn't drop packets at random, though. Some traffic would be deterministically dropped, such as the parallel A/AAAA DNS lookups generated by the glibc DNS stub resolver. For instance, in the following tcpdump output:

[1] 17:31:46.989073 IP [My IP].50775 > 53571+ A? (34)
[2] 17:31:46.989102 IP [My IP].50775 > 14987+ AAAA? (34)
[3] 17:31:47.020423 IP > [My IP].50775: 53571 2/0/0 CNAME, […]
[4] 17:31:51.993680 IP [My IP].50775 > 53571+ A? (34)
[5] 17:31:52.025138 IP > [My IP].50775: 53571 2/0/0 CNAME, […]
[6] 17:31:52.025282 IP [My IP].50775 > 14987+ AAAA? (34)
[7] 17:31:52.056550 IP > [My IP].50775: 14987 2/0/0 CNAME, […]

Packets [1] and [2] are the A and AAAA queries being initiated in parallel. Note that they both use the same 4-tuple of (Source IP, Destination IP, Source Port, Destination Port), but with different DNS IDs. Packet [3] is the response to packet [1]. The response to packet [2] never arrives, and five seconds later, the glibc stub resolver times out and retries in single-request mode, which performs the A and AAAA queries sequentially. Packets [4] and [5] are the type A query and response, and packets [6] and [7] are the AAAA query and response.
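To make the failure pattern concrete, here is a sketch (hypothetical Python, not from the original post; the name example.com and the helper dns_query are illustrative) of how such a parallel A/AAAA query pair looks on the wire:

```python
import struct

def dns_query(name, qtype, txid):
    """Build a minimal DNS query packet (RFC 1035 wire format)."""
    # 12-byte header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

A, AAAA = 1, 28
# The DNS IDs seen in the capture above: 53571 (A) and 14987 (AAAA)
q_a    = dns_query("example.com", A,    53571)
q_aaaa = dns_query("example.com", AAAA, 14987)

# Sent back-to-back from one UDP socket, both packets share the same
# (src IP, dst IP, src port, dst port); only the ID and QTYPE differ.
```

Because both packets travel over the same UDP flow, a middlebox that tracks DNS state per 4-tuple and mishandles a second outstanding query can deterministically drop one of the two responses, which matches the behaviour observed above.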

The Arris SB6190 running firmware 9.1.93N would consistently interfere with these parallel DNS requests, but only when operating in its “degraded” mode. It also didn't matter whether glibc was configured to use an IPv4 or IPv6 nameserver, or which nameserver was being used. Power cycling the modem would fix the issue for a few days.

My ISP offered to downgrade the firmware on the Arris SB6190 to version 9.1.93K. This firmware version doesn't go into a degraded mode after a few days, but it does exhibit higher latency, and more jitter:

It seemed unlikely that Arris would fix the firmware issues in the SB6190 before the end of my 30-day return window, so I returned the SB6190 and purchased a Netgear CM600. This modem appears to be based on the Broadcom BCM3384 and looking at an older release of the CM600 firmware source reveals that the device runs the open source eCos embedded operating system.

The Netgear CM600 so far hasn't exhibited any of the issues I found with the Arris SB6190 modem. Here is a SmokePing graph for the CM600, which shows median latency about 1 ms lower than the Arris modem:

It's not clear which company is to blame for the problems in the Arris modem. Looking at the DOCSIS drivers in the SB6190 firmware source reveals copyright statements from ARRIS Group, Texas Instruments, and Intel. However, I would recommend avoiding cable modems produced by Arris in particular, and cable modems based on the Intel Puma SoC in general.

Norbert Preining: Debian/TeX Live 2016.20160805-1

7 August, 2016 - 04:31

TUG 2016 is over, and I have returned from a wonderful trip to Toronto and Maine. High time to release a new checkout of the TeX Live packages. After that I will probably need some time for another checkout, as there are a lot of plans on the table: upstream created a new collection, which means new package in Debian, which needs to go through NEW, and I am also planning to integrate tex4ht to give it an update. Help greatly appreciated here.

This package also sees the (third) revision of how the config files for pdftex and luatex are structured; we have now settled on a layout, so hopefully this will close some of the issues that have appeared.

New packages

biblatex-ijsra, biblatex-nottsclassic, binarytree, diffcoeff, ecgdraw, fvextra, gitfile-info, graphics-def, ijsra, mgltex, milog, navydocs, nodetree, oldstandardt1, pdflatexpicscale, randomlist, texosquery

Updated packages

2up, acmart, acro, amsmath, animate, apa6, arabluatex, archaeologie, autobreak, beebe, biblatex-abnt, biblatex-gost, biblatex-ieee, biblatex-mla, biblatex-source-division, biblatex-trad, binarytree, bxjscls, changes, cloze, covington, cs, csplain, csquotes, csvsimple, datatool, datetime2, disser, dvipdfmx, dvips, emisa, epstopdf, esami, etex-pkg, factura, fancytabs, forest, genealogytree, ghsystem, glyphlist, gost, graphics, hyperref, hyperxmp, imakeidx, jadetex, japanese-otf, kpathsea, latex, lstbayes, luatexja, mandi, mcf2graph, mfirstuc, minted, oldstandard, optidef, parnotes, philosophersimprint, platex, protex, pst-pdf, ptex, pythontex, readarray, reledmac, sepfootnotes, sf298, skmath, skrapport, stackengine, sttools, tcolorbox, tetex, texinfo, texlive-docindex, texlive-es, texlive-scripts, thesis-ekf, tools, toptesi, tudscr, turabian-formatting, updmap-map, uplatex, uptex, velthuis, xassoccnt, ycbook.


Dirk Eddelbuettel: RcppStreams 0.1.1

6 August, 2016 - 09:54

A maintenance release of RcppStreams is now on CRAN. RcppStreams brings the excellent Streamulus C++ template library for event stream processing to R.

Streamulus, written by Irit Katriel, uses very clever template meta-programming (via Boost Fusion) to implement an embedded domain-specific event language created specifically for event stream processing.

This release updates the compilation standard to C++11 per CRAN's request, as this helps with both current and upcoming compiler variants. A few edits were made to DESCRIPTION and the Travis driver file was updated, but no new code was added.

The NEWS file entries follow below:

Changes in version 0.1.1 (2016-08-05)
  • Compilation is now done using C++11 standards per request of CRAN to help with an array of newer (pre-release) and older compilers

  • The Travis CI script was updated to use our fork; it now also installs all dependencies as binary .deb files.

  • The README.md was updated with additional badges.

  • The DESCRIPTION file now has URL and BugReports entries.

Courtesy of CRANberries, there is also a copy of the DESCRIPTION file for this release. More detailed information is on the RcppStreams page and of course on the Streamulus page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Petter Reinholdtsen: Sales number for the Free Culture translation, first half of 2016

6 August, 2016 - 03:45

As my regular readers probably remember, over the last year I published a French and a Norwegian translation of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig. A bit less known is the fact that, due to the way I created the translations using docbook and po4a, I also recreated the English original. And because I had already created a new PDF edition, I published it too. The revenue from the books is sent to the Creative Commons Corporation. In other words, I do not earn any money from this project; I just earn the warm fuzzy feeling that the text is available for a wider audience and more people can learn why the Creative Commons is needed.

Today, just for fun, I had a look at the sales numbers over at Lulu, which takes care of payment, printing and shipping. Much to my surprise, the English edition is selling better than both the French and Norwegian editions, despite the fact that it has been available in English since it was first published. In total, 24 paper books were sold for USD $19.99 between 2016-01-01 and 2016-07-31:

Title / language          Quantity
Culture Libre / French           3
Fri kultur / Norwegian           7
Free Culture / English          14

The books are available both from Lulu and from large book stores like Amazon and Barnes & Noble. Most revenue, around $10 per book, is sent to the Creative Commons project when the book is sold directly by Lulu. The other channels give less revenue. The summary from Lulu tells me 10 books were sold via the Amazon channel, 10 via Ingram (what is this?) and 4 directly by Lulu. The revenue sent so far this year is USD $101.42. I have no idea what kind of sales numbers to expect, so I do not know if that is a good amount of sales for a 10 year old book or not. But it makes me happy that the buyers find the book, and I hope they enjoy reading it as much as I did.

The ebook edition is available for free from GitHub.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

Arturo Borrero González: Spawning a new blog with jekyllrb

5 August, 2016 - 12:00

I have been delighted with git for several years now. It's a very powerful tool and I use it every day.
I try to use git in all possible tasks: bind servers, configurations, firewalls, and other personal stuff.

However, there has always been one item on my git-TODO: a blog managed with git.

After a bit of searching, I found an interesting technology: jekyllrb hosted at github pages. Jekyll looked easy to manage and easy to learn for a newbie like me.
There are some very good looking blogs out there using this combination, for example:

But I was too lazy to migrate this 'ral-arturo' blog from blogger to jekyll, so I decided to create a new blog from scratch.

This time, the new blog is written in Spanish and is about adventures, nature, travels and outdoor sports.
Perhaps you noticed this article about the Mulhacen mountain (BTW, we did it! :-))
The new blog is called,

I like the workflow with git & jekyll & github:

  • clone the repository
  • write a new post in markdown
  • read it and correct it locally with 'jekyll serve'
  • commit and push to github
  • done!

Who knows, perhaps this 'ral-arturo' blog ends being migrated to the new system as well.

Steve Kemp: Using the compiler to help you debug segfaults

5 August, 2016 - 09:15

Recently somebody reported that my console-based mail-client was segfaulting when opening an IMAP folder, and then when they tried with a local Maildir-hierarchy the same fault was observed.

I couldn't reproduce the problem at all, as neither my development host (read "my personal desktop"), nor my mail-host had been crashing at all, both being in use to read my email for several months.

Debugging crashes with no backtrace, or real hint of where to start, is a challenge. Even when downloading the same Maildir samples I couldn't see a problem. It was only when I decided to see if I could add some more diagnostics to my code that I came across a solution.

My intention was to make it easier to receive a backtrace, by adding more compiler options:

  -fsanitize=address -fno-omit-frame-pointer

I added those options and my mail-client immediately started to segfault on my own machine(s), almost as soon as it started. Ultimately I found three pieces of code where I was allocating C++ objects and passing them to the Lua stack, a pretty fundamental part of the code, which were buggy. Once I'd tracked down the areas of code that were broken and fixed them the user was happy, and I was happy too.

It's interesting that I've been running for over a year with these bogus things in place, which "just happened" to not crash for me or anybody else. In the future I'll be adding these options to more of my C-based projects, as there seems to be virtually no downside.

In related news my console editor has now achieved almost everything I want it to, having gained:

  • Syntax highlighting via Lua + LPEG
  • Support for TAB completion of Lua-code and filenames.
  • Bookmark support.
  • Support for setting the mark and copying/cutting regions.

The only outstanding feature, which is a biggy, is support for Undo which I need to add.

Happily no segfaults here, so far..

Joey Hess: keysafe

5 August, 2016 - 07:24

Have you ever thought about using a gpg key to encrypt something, but didn't due to worries that you'd eventually lose the secret key? Or maybe you did use a gpg key to encrypt something and lost the key. There are nice tools like paperkey to back up gpg keys, but they require things like printers, and a secure place to store the backups.

I feel that simple backup and restore of gpg keys (and encryption keys generally) is keeping some users from using gpg. If there was a nice automated solution for that, distributions could come preconfigured to generate encryption keys and use them for backups etc. I know this is a missing piece in the git-annex assistant, which makes it easy to generate a gpg key to encrypt your data, but can't help you back up the secret key.

So, I'm thinking about storing secret keys in the cloud. Which seems scary to me, since when I was a Debian Developer, my gpg key could have been used to compromise millions of systems. But this is not about developers, it's about users, and so trading off some security for some ease of use may be appropriate. Especially since the alternative is no security. I know that some folks back up their gpg keys in the cloud using Dropbox. We can do better.

I've thought up a design for this, called keysafe. The synopsis of how it works is:

The secret key is split into three shards, and each is uploaded to a server run by a different entity. Any two of the shards are sufficient to recover the original key. So any one server can go down and you can still recover the key.

A password is used to encrypt the key. For the servers to access your key, two of them need to collude together, and they then have to brute force the password. The design of keysafe makes brute forcing extra difficult by making it hard to know which shards belong to you.

Indeed the more people that use keysafe, the harder it becomes to brute-force anyone's key!
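As an illustration of the 2-of-3 threshold idea, here is a toy per-byte Shamir sketch of my own (hypothetical, not keysafe's actual format or parameters; the real design also layers password-based encryption on top):

```python
import secrets

P = 257  # prime just above a byte's range; toy field for this sketch

def split_byte(b, n=3):
    # 2-of-3: a random degree-1 polynomial f(x) = b + a*x (mod P); f(0) is the secret
    a = secrets.randbelow(P)
    return [(x, (b + a * x) % P) for x in range(1, n + 1)]

def recover_byte(shares):
    # Lagrange interpolation at x = 0 from any two (x, y) points
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def split_secret(secret, n=3):
    per_byte = [split_byte(b, n) for b in secret]
    # shard s holds one (x, y) point per byte of the secret
    return [[pb[s] for pb in per_byte] for s in range(n)]

def recover_secret(two_shards):
    return bytes(recover_byte([shard[i] for shard in two_shards])
                 for i in range(len(two_shards[0])))
```

Any single shard reveals nothing about the key (each share value is uniformly distributed), but any two suffice to reconstruct it, so one server can disappear without data loss.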

I could really use some additional reviews and feedback on the design by experts.

This project is being sponsored by Purism and by my Patreon supporters. By the way, I'm 15% of the way to my Patreon goal after one day!

Phil Hands: EOMA68: > $60k pledged on Crowd Supply

5 August, 2016 - 05:04

Crowd Supply has a campaign to fund production of EOMA68 computer cards (and associated peripherals) which recently passed the $60,000 mark.

If you were at DebConf13 in Switzerland, you may have seen me with some early prototypes that I had been lent to show people.

The concept: build computers on a PCMCIA physical form-factor, thus confining most of the hardware and software complexity in a single replaceable item, decoupling the design of the outer device from the chips that drive it.

There is a lot more information about this at crowdsupply, and at -- I hope people find it interesting enough to sign up.

BTW While I host Rhombus Tech's website as a favour to Luke Leighton, I have no financial links with them.


Daniel Kahn Gillmor: Changes for GnuPG in Debian

4 August, 2016 - 04:55

The GNU Privacy Guard (GnuPG) upstream team maintains three branches of development: 1.4 ("classic"), 2.0 ("stable"), and 2.1 ("modern").

They differ in various ways: software architecture, supported algorithms, network transport mechanisms, protocol versions, development activity, co-installability, etc.

Debian currently ships two versions of GnuPG in every maintained suite -- in particular, /usr/bin/gpg has historically always been provided by the "classic" branch.

That's going to change!

Debian unstable will soon be moving to the "modern" branch for providing /usr/bin/gpg. This will give several advantages for Debian and its users in the future, but it will require a transition. Hopefully we can make it a smooth one.

What are the benefits?

Compared to "classic", the "modern" branch has:

  • updated crypto (including elliptic curves)
  • componentized architecture (e.g. libraries, some daemonized processes)
  • improved key storage
  • better network access (including talking to keyservers over tor)
  • stronger defaults
  • more active upstream development
  • safer info representation (e.g. no more key IDs, fingerprints easier to copy-and-paste)

If you want to try this out, the changes are already made in experimental. Please experiment!

What does this mean for end users?

If you're an end user and you don't use GnuPG directly, you shouldn't notice much of a change once the packages start to move through the rest of the archive.

Even if you do use GnuPG regularly, you shouldn't notice too much of a difference. One of the main differences is that all access to your secret key will be handled through gpg-agent, which should be automatically launched as needed. This means that operations like signing and decryption will cause gpg-agent to prompt the user to unlock any locked keys directly, rather than gpg itself prompting the user.

If you have an existing keyring, you may also notice a difference based on a change of how your public keys are managed, though again this transition should ideally be smooth enough that you won't notice unless you care to investigate more deeply.

If you use GnuPG regularly, you might want to read the NEWS file that ships with GnuPG and related packages for updates that should help you through the transition.

If you use GnuPG in a language other than English, please install the gnupg-l10n package, which contains the localization/translation files. For versions where those files are split out of the main package, gnupg explicitly Recommends: gnupg-l10n already, so it should be brought in for new installations by default.

If you have an archive of old data that depends on known-broken algorithms, PGP3 keys, or other deprecated material, you'll need to have "classic" GnuPG around to access it. That will be provided in the gnupg1 package.

What does this mean for package maintainers?

If you maintain a package that depends on gnupg: be aware that the gnupg package in Debian is going through this transition.

A few general thoughts:

  • If your package Depends: gnupg for signature verification only, you might prefer to have it Depends: gpgv instead. gpgv is a much simpler tool than the full-blown GnuPG suite, and should be easier to manage. I'm happy to help with such a transition (we've made it recently with apt already).

  • If your package Depends: gnupg and expects ~/.gnupg/ to be laid out in a certain way, that's almost certainly going to break at some point. ~/.gnupg/ is GnuPG's internal storage, and it's not recommended to rely on any specific data structures there, as they may change. gpg offers commands like --export, --import, and --delete for manipulating its persistent storage. please use them instead!

  • If your package depends on parsing or displaying gpg's output for the user, please make sure you use its special machine-readable form (--with-colons). Parsing the human-readable text is not advised and may change from version to version.
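For instance, here is a minimal sketch of consuming the machine-readable form (the sample records below are made up and abbreviated; the authoritative field list is in GnuPG's doc/DETAILS file):

```python
# Abbreviated, hypothetical sample of `gpg --list-keys --with-colons` output.
sample = """\
pub:u:4096:1:AAAABBBBCCCCDDDD:1402663204:::u:::scESC:
uid:u::::1402663204::HASH::Example User <user@example.org>::
sub:u:4096:1:1111222233334444:1402663204::::::e:
"""

def parse_colons(text):
    """Yield (record_type, fields) for each colon-separated record line."""
    for line in text.splitlines():
        fields = line.split(":")
        yield fields[0], fields

keys, uids = [], []
for rtype, fields in parse_colons(sample):
    if rtype == "pub":
        keys.append(fields[4])   # field 5: long key ID
    elif rtype == "uid":
        uids.append(fields[9])   # field 10: user ID string
```

Unlike the human-readable listing, the colon-separated fields and their positions are a stable interface across GnuPG versions.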

If you maintain a package that depends on gnupg2 and tries to use gpg2 instead of gpg, that should stay ok. However, at some point it'd be nice to get rid of /usr/bin/gpg2 and just have one expected binary (gpg). So you can help with that:

  • Look for places where your package expects gpg2 and make it try gpg instead, if you can make your code fall back cleanly.

  • Change your dependencies to indicate gnupg (>= 2)

  • Patch lintian to encourage other people to make this switch ;)

What specifically needs to happen?

The last major step for this transition was renaming the source package for "classic" GnuPG to be gnupg1. This transition is currently in the ftp-master's NEW queue. Once it makes it through that queue, and both gnupg1 and gnupg2 have been in experimental for a few days without reports of dangerous breakage, we'll upload both gnupg1 and gnupg2 to unstable.

We'll also need to do some triage on the BTS, reassigning some reports which are really only relevant for the "classic" branch.

Please report bugs via the BTS as usual! You're also welcome to ask questions and make suggestions on #debian-gnupg on, or to mail the Debian GnuPG packaging team at

Happy hacking!

Bdale Garbee: ChaosKey

4 August, 2016 - 04:17

I'm pleased to announce that, at long last, the ChaosKey hardware random number generator described in talks at Debconf 14 in Portland and Debconf 16 in Cape Town is now available for purchase from Garbee and Garbee.

Keith Packard: chaoskey

4 August, 2016 - 04:16
ChaosKey v1.0 Released — USB Attached True Random Number Generator

ChaosKey, our random number generator that attaches via USB, is now available for sale from the altusmetrum store.

We talked about this device at Debconf 16 last month.

Support for this device is included in Linux starting with version 4.1. Plug ChaosKey into your system and the driver will automatically add entropy into the kernel pool, providing a constant supply of true random numbers to help keep the system secure.

ChaosKey is free hardware running free software, built with free software on a free operating system.

Joey Hess: Patreon

4 August, 2016 - 02:08

I've been funded for two years by the DataLad project to work on git-annex. This has been a super excellent gig; they provided funding and feedback on ways git-annex could be improved, and I had a large amount of flexibility to decide what to work on in git-annex. Also plenty of spare time to work on new projects like propellor, concurrent-output, and scroll. It was an awesome way to spend the last two years of my twenty years of free software.

That funding is running out. I'd like to continue this great streak of working on the free software projects that are important to me. I'd normally dip into my savings at this point and keep on going until some other source of funding turned up. But, my savings are about to be obliterated, since I'm buying the place where I've had so much success working distraction-free.

So, I've started a Patreon page to fund my ongoing work. Please check it out and contribute if you want to.

Some details about projects I want to work on this fall:

Elena 'valhalla' Grandi: The Cat Model of Package Ownership

4 August, 2016 - 01:02

Debian has been moving away from strong ownership of packages by package maintainers and towards encouraging group maintainership, for very good reasons: single maintainers have a bad bus factor and a number of other disadvantages.

When single maintainership is changed into maintainership by a small¹, open group of people who can easily communicate and sync with each other, everything is just better: there is an easy way to gradually replace people who want to leave, but there is also no duplication of efforts (because communication is easy), there are means to always have somebody available for emergency work and generally package quality can only gain from it.

Unfortunately, having such a group of maintainers for every package would require more people than are available and willing to work on it, and while I think it's worth making the effort to have big and important packages managed that way, it may not be so for the myriad of small ones that make up the long tail of a distribution.

Many of those packages may end up being maintained in a big team such as the language-based ones, which is probably better than remaining with a single maintainer, but can lead to some problems.

My experience with the old OpenEmbedded, back when it was still using monotone instead of git² and everybody was maintaining everything, however, leads me to think that this model has a big danger of turning into nobody maintains anything, because when something needs to be done everybody is thinking that somebody else will do it.

As a way to prevent that, I have been thinking in the general direction of a Cat Model of Package Ownership, which may or may not be a way to prevent some risks of both personal maintainership and big teams.

The basic idea is that the “my” in “my packages” is not the “my” in “my toys”, but the “my” in “my Cat, to whom I am a servant”.

As in the case of a cat, if my package needs a visit to the vet, it's my duty to do so. Other people may point me to the need of such a visit, e.g. by telling me that they have seen the cat leaving unhealthy stools, that there is a bug in the package, or even that upstream released a new version a week ago, did you notice?, but the actual putting the package in a cat carrier and bringing it to the vet falls on me.

Whether you're allowed to play with or pet the cat is her decision, not mine; giving her food or making changes to the package is usually fine, but please ask first: a few cats have medical issues that require a special diet.

And like cats, sometimes the cat may decide that I'm not doing a good enough job of serving her, and move away to another maintainer; just remember that there is a difference between a lost cat who wants to go back to her old home and a cat that is looking for a new one. When in doubt, packages usually wear a collar with contact informations, trying to ping those is probably a good idea.

This is mostly a summer afternoon idea and will probably require some refinement, but I think that the basic idea can have some value. Comments are appreciated on the federated social networks where this post is being published, via email (valid addresses are on my website and on my GPG key), or with a post on a blog that appears on Planet Debian.

¹ how small is small depends a lot on the size of the package, the amount of work it requires, how easy it is to parallelize it and how good are the people involved at communicating, so it would be quite hard to put a precise number here.

² I've moved away from it because the boards I was using could run plain Debian, but I've heard that after the move to git there have been a number of workflow changes (of which I've seen the start) and everything now works much better.

Guido Günther: Debian Fun in July 2016

3 August, 2016 - 14:02
Debian LTS

July marked the fifteenth month I contributed to Debian LTS under the Freexian umbrella. As usual I spent the 8 hours working on these LTS things:

  • Updated QEMU and QEMU-KVM packages to fix CVE-2016-5403, CVE-2016-4439, CVE-2016-4020, CVE-2016-2857 and CVE-2015-5239 resulting in DLA-573-1 and DLA-574-1
  • Updated icedove to 45.2.0 fixing CVE-2016-2818 resulting in DLA-574-1
  • Reviewed and uploaded xen 4.1.6.lts1-1. The update itself was prepared by Bastian Blank.
  • The little bit of remaining time I spent on further work on the ruby-active{model,record}-3.2 and ruby-actionpack-3.2 (aka rails) CVEs. Although I have fixes for most of the CVEs already, there are still some left where I'm not yet clear whether the packages are affected.
  • Added some trivial autopkgtest for qemu-img (#832982) (on non LTS time)
Other Debian stuff
  • Fixed CVE-2016-5008 by uploading libvirt 2.0.0 to sid and 1.2.9-9+deb8u3 to stable-p-u
  • Uploaded libvirt 2.1.0~rc1 to experimental
  • Uploaded libvirt-python 2.0.0 to sid
  • Uploaded libosinfo 0.3.1 to sid preparing for the upcoming upstream package split
  • Uploaded virt-manager 1.4.0 to sid
  • Uploaded network-manager-iodine 1.2.0 to sid
  • Uploaded cups-pk-helper 0.2.6 to sid
  • Triaged apparmor related bugs in libvirt most notably the one affecting hotplugging of disks (#805002) which turned out to be rooted in the kernel not reloading profiles properly.
  • Uploaded git-buildpackage 0.8.0, 0.8.1 to experimental adding additional tarball support to gbp import-orig among other things

Dirk Eddelbuettel: digest 0.6.10

3 August, 2016 - 10:13

A new release of the digest package, now at version number 0.6.10, is on CRAN. I also just prepared the Debian upload.

This release, just like the previous one, is once again mostly the work of external contributors. Michel Lang added length checks to sha1(); Thierry Onkelinx extended sha1() support and added more tests; Viliam Simko also extended sha1() to more types; and Henrik Bengtsson improved intervals and fixed a bug with file usage.

CRANberries provides the usual summary of changes to the previous version.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

John Goerzen: All Aboard

3 August, 2016 - 08:13

“Aaaaaall Aboard!” *chug* *chug*

And so began a “trip” aboard our hotel train in Indianapolis, conducted by our very own Jacob and Oliver.

Because, well, what could be more fun than spending a few days in the world’s only real Pullman sleeping car, on its original service track, inside a hotel?

We were on a family vacation to Indianapolis, staying in what two railfan boys were sure to enjoy: a hotel actually built into part of the historic Indianapolis Union Station complex. This is the original train track and trainshed. They moved in the Pullman cars, then built the hotel around them. Jacob and Oliver played for hours, acting as conductors and engineers, sending their “train” all across the country to pick up and drop off passengers.


Have you ever seen a kid’s face when you introduce them to something totally new, and they think it is really exciting, but a little scary too?

That was Jacob and Oliver when I introduced them to saganaki (flaming cheese) at a Greek restaurant. The conversation went a little like this:

“Our waitress will bring out some cheese. And she will set it ON FIRE — right by our table!”

“Will it burn the ceiling?”

“No, she’ll be careful.”

“Will it be a HUGE fire?”

“About a medium-sized fire.”

“Then what will happen?”

“She’ll yell ‘OPA!’ and we’ll eat the cheese after the fire goes out.”

“Does it taste good?”

“Oh yes. My favorite!”

It turned out several tables had ordered saganaki that evening, so whenever I saw it coming out, I’d direct their attention to it. Jacob decided that everyone should call it “opa” instead of saganaki because that’s what the waitstaff always said. Pretty soon whenever they’d see something appear in the window from the kitchen, there’d be craning necks and excited jabbering of “maybe that’s our opa!”

And when it finally WAS our “opa”, there were laughs of delight and I suspect they thought that was the best cheese ever.

Giggling Elevators

Fun times were had pressing noses against the glass around the elevator. Laura and I sat on a nearby sofa while Jacob and Oliver sat by the elevators, anxiously waiting for someone to need to go up and down. They'd point and wave at elevators coming down, and when elevator passengers waved back, Oliver would burst out giggling and run over to Laura and me with excitement.

Some history

We got to see the grand hall of Indianapolis Union Station — what a treat to be able to set foot in this magnificent, historic space, the world’s oldest union station. We even got to see the office where Thomas Edison worked and, as a hotel employee explained, from which he was fired for doing too many experiments on the job.

Water and walkways

Indy has a system of elevated walkways spanning quite a section of downtown. It can be rather complex navigating them, and after our first day there, I offered to let Jacob and Oliver be the leaders. Boy did they take pride in that! They stopped to carefully study maps and signs, and proudly announced “this way” or “turn here” – and were usually correct.

And it was the same in the paddleboat we took down the canal. Both boys wanted to be in charge of steering, and we only scared a few other paddleboaters.


Our visit ended with the grand fireworks show downtown, set off from atop a skyscraper. I had been scouting for places to watch from, and figured that a bridge-walkway would be great. A couple other families had that thought too, and we all watched the 20-minute show in the drizzle.

Loving brothers

By far my favorite photo from the week is this one, of Jacob and Oliver asleep, snuggled up next to each other under the covers. They sure are loving and caring brothers, and had a great time playing together.

Olivier Grégoire: Tenth: SmartInfo is alive!

2 August, 2016 - 23:57

This week, I worked on the GNOME client. I wanted to link my right-click menu with the call view. That was difficult because the right-click menu is created by a signal handler, so I didn't have access to the menu instance to send my own signal. I tried a lot of ways to implement my signal, but it just wasn't possible. Then I discovered GAction. With this technique, I just needed to change the state of my action and connect it to my method in the view, and it was done!

I now had a complete working solution, which I tested to find improvements and bugs:
- Replaced the two buttons that launch and stop the system with a single button.
- Fixed the bug that crashed Ring when I tried to use SmartInfo in a new call.
- Fixed some other small bugs.

See you next week!


Dirk Eddelbuettel: RcppGetconf 0.0.2

2 August, 2016 - 08:28

A first update for the recent RcppGetconf package for reading system configuration --- not unlike getconf from the libc library --- is now out. Almost immediately after I tweeted / blogged asking for help with OS X builds, fellow Rcpp hacker Qiang Kou created a clever pull request allowing for exactly that. So now we cover two POSIX systems that matter most these days --- Linux and OS X --- but as there are more out there, please do try, test and send those pull requests.

We also added a new function getConfig() retrieving a single (given) value complementing the earlier catch-all function getAll(). You can find out more about RcppGetconf from the local RcppGetconf page and the GitHub repo.
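As a quick illustration (a sketch, assuming the package is installed from CRAN; the key name "PAGE_SIZE" is an assumption here, and the exact spelling on a given system should be checked against the output of getAll()), the new accessor can be exercised from the command line:

```shell
# List available configuration variables, then query a single one
# with the new getConfig(). "PAGE_SIZE" is an assumed key name;
# consult getAll() for the names valid on your system.
Rscript -e 'library(RcppGetconf); head(getAll())'
Rscript -e 'library(RcppGetconf); getConfig("PAGE_SIZE")'
```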

Changes in this release are as follows:

Changes in RcppGetconf version 0.0.2 (2016-08-01)
  • A new function getConfig for single values was added.

  • The struct vars is now defined more portably, allowing compilation on OS X (PR #1 by Qiang Kou).

Courtesy of CRANberries, there is a diffstat report. More about the package is at the local RcppGetconf page and the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Charles Plessy: Amazon cloud: refreshing my skills.

2 August, 2016 - 04:38

For a few years I had not attempted any serious task on the Amazon cloud, and it took me a bit of time to regain my reflexes and adapt to the changes. In particular, the cheapest instances, t2.nano, are only accessible via virtual private clouds (VPC), and it took me a while to find out how to create a simple one. Perhaps this is because all AWS accounts created after March 18, 2013, automatically have a default VPC, and everybody else who needed their own simple VPC created it long ago. In the end, this was not complicated at all, which is probably why I could not find a tutorial.

In brief, one needs first to create a VPC. If it is just for spawning an instance from time to time, the IP range does not matter much. Default VPCs use, so let's do the same.

CIDR_BLOCK=
aws ec2 create-vpc --cidr-block $CIDR_BLOCK

In the command's output, there is the VPC's identifier, which I paste by hand into a variable called VPC. The same pattern is repeated for each command that creates something. One can also find the VPC's identifier with the command aws ec2 describe-vpcs.
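The copy-and-paste step can also be avoided entirely. A sketch, assuming a configured AWS CLI: its --query option (a JMESPath expression) together with --output text extracts the identifier directly.

```shell
# Create the VPC and capture its identifier in one step,
# instead of pasting it by hand from the JSON output.
VPC=$(aws ec2 create-vpc --cidr-block "$CIDR_BLOCK" \
        --query 'Vpc.VpcId' --output text)
echo "$VPC"
```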


Then, create a subnet. Again, no need to complicate things: in our simple case one can give it the full IP range. I cut and paste the returned identifier into the variable SUBNET. So that instances receive a public IP address, as in default VPCs and as in the usual behaviour of the VPC-less cloud, one needs to set the attribute MapPublicIpOnLaunch.

aws ec2 create-subnet --vpc-id $VPC --cidr-block $CIDR_BLOCK
aws ec2 modify-subnet-attribute --subnet-id $SUBNET --map-public-ip-on-launch 

Then, create a gateway (paste the identifier in GATEWAY) and attach it to the VPC.

aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id $GATEWAY --vpc-id $VPC

A routing table was created automatically, and one can find its identifier via the command describe-route-tables. Then, create a default route to the gateway.

aws ec2 describe-route-tables
aws ec2 create-route --route-table-id $ROUTETABLE --destination-cidr-block --gateway-id $GATEWAY
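Here too the identifier can be captured without hand-copying. A sketch, assuming the VPC identifier is already in $VPC: describe-route-tables accepts a --filters option to restrict the output to one VPC.

```shell
# Find the route table that was created automatically with the VPC,
# filtering describe-route-tables by VPC ID.
ROUTETABLE=$(aws ec2 describe-route-tables \
        --filters Name=vpc-id,Values="$VPC" \
        --query 'RouteTables[0].RouteTableId' --output text)
echo "$ROUTETABLE"
```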

Of course, if one does not open up the traffic, no instance can be contacted from the outside. Here I open port 22 for SSH.

aws ec2 describe-security-groups
aws ec2 authorize-security-group-ingress --group-id $SECURITY_GROUP --protocol tcp --port 22 --cidr
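With the security group open, an instance can then be launched into the subnet. A sketch, where $AMI and $KEYNAME are placeholders for an AMI identifier and an existing key pair name:

```shell
# Launch a t2.nano into the new subnet; since MapPublicIpOnLaunch is
# set on the subnet, the instance will receive a public IP address.
aws ec2 run-instances --image-id "$AMI" --instance-type t2.nano \
        --subnet-id "$SUBNET" \
        --security-group-ids "$SECURITY_GROUP" \
        --key-name "$KEYNAME"
```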

Another novelty: Amazon now distributes free command-line tools of its own, which are more comprehensive than euca2ools.

Next, I will try the Debian Installer in the cloud again.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.