Planet Debian

Planet Debian - http://planet.debian.org/

Jonathan Dowland: Hof.java

11 May, 2017 - 15:45

Daniel Lange: Thunderbird startup hang (hint: Add-Ons)

11 May, 2017 - 15:00

If you see Thunderbird hanging during startup for a minute and then continuing to load fine, you are probably running into an issue similar to what I saw when Debian migrated Icedove back to the "official" Mozilla Thunderbird branding and changed ~/.icedove to ~/.thunderbird in the process (one symlinked to the other).

Looking at the console log (i.e. start Thunderbird from a terminal so you can see its messages), I got:

console.log: foxclocks.bootstrap._loadIntoWindow(): got xul-overlay-merged - waiting for overlay-loaded
[.. one minute delay ..]
console.log: foxclocks.bootstrap._windowListener(): got window load chrome://global/content/commonDialog.xul

Stracing confirms it hangs because Thunderbird loops waiting for a FUTEX until that apparently gets kicked by a XUL core timeout.
(Thanks for defensive programming folks!)
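
If you want to reproduce that observation yourself, one way (a sketch; the option set and file names are only a suggestion) is to run Thunderbird under strace, following threads and filtering for futex calls:

$ strace -f -e trace=futex -o /tmp/thunderbird.trace thunderbird
[.. wait out the startup hang, then quit Thunderbird ..]
$ grep FUTEX_WAIT /tmp/thunderbird.trace | less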

So in my case uninstalling the Add-On Foxclocks easily solved the problem.

I assume other Thunderbird Add-Ons may cause the same issue, hence the more generic description above.

Jonathan Dowland: Three things I didn't know about Haskell

11 May, 2017 - 03:26

I've been trying to refresh my Haskell skills and Paul Callaghan recommended I read the paper "A History of Haskell: Being Lazy With Class", which I found (surprisingly?) fascinating. Three facts about Haskell that I didn't know jumped out at me:

  • The notion of Lazy Evaluation in a programming language was invented independently at least three times, but once here at Newcastle University by Peter Henderson (in collaboration with James H. Morris Jr of Xerox PARC). The paper was published as Technical Report 85 and the full text is available (albeit as an 11MB PDF of scanned pages).

  • Despite Haskell's popularity as a platform for formal computing science, the language itself does not have formally specified semantics (or at least didn't when this paper was written, ten years ago)

  • When I was first exposed to Haskell, the worlds of functional programming and "real world" programming seemed far apart. (This perception was largely due to my own experiences, exposures and biases). In the time since, we've witnessed many FP concepts and ideas gain traction in that "real world", such as Lambda Expressions in Java 8. However, the reverse is also true, and Haskell's hierarchical module names are largely borrowed from Java's hierarchical package naming scheme.

Junichi Uekawa: I tried learning rust but making very little progress.

10 May, 2017 - 17:58
I tried learning rust but making very little progress.

Clint Adams: Four years

10 May, 2017 - 03:45

#706067

Posted on 2017-05-09 Tags: barks

Matthew Garrett: Intel AMT on wireless networks

10 May, 2017 - 03:18
More details about Intel's AMT vulnerability have been released - it's about the worst case scenario, in that it's a total authentication bypass that appears to exist independent of whether AMT is being used in Small Business or Enterprise modes (more background in my previous post here). One thing I claimed was that even though this was pretty bad it probably wasn't super bad, since Shodan indicated that there were only a few thousand machines on the public internet accessible via AMT. Most deployments were probably behind corporate firewalls, which meant that it was plausibly a vector for spreading within a company but probably wasn't a likely initial vector.

I've since done some more playing and come to the conclusion that it's rather worse than that. AMT actually supports being accessed over wireless networks. Enabling this is a separate option - if you simply provision AMT it won't be accessible over wireless by default, you need to perform additional configuration (although this is as simple as logging into the web UI and turning on the option). Once enabled, there are two cases:
  1. The system is not running an operating system, or the operating system has not taken control of the wireless hardware. In this case AMT will attempt to join any network that it's been explicitly told about. Note that in default configuration, joining a wireless network from the OS is not sufficient for AMT to know about it - there needs to be explicit synchronisation of the network credentials to AMT. Intel provide a wireless manager that does this, but the stock behaviour in Windows (even after you've installed the AMT support drivers) is not to do this.
  2. The system is running an operating system that has taken control of the wireless hardware. In this state, AMT is no longer able to drive the wireless hardware directly and counts on OS support to pass packets on. Under Linux, Intel's wireless drivers do not appear to implement this feature. Under Windows, they do. This does not require any application level support, and uninstalling LMS will not disable this functionality. This also appears to happen at the driver level, which means it bypasses the Windows firewall.
Case 2 is the scary one. If you have a laptop that supports AMT, and if AMT has been provisioned, and if AMT has had wireless support turned on, and if you're running Windows, then connecting your laptop to a public wireless network means that AMT is accessible to anyone else on that network[1]. If it hasn't received a firmware update, they'll be able to do so without needing any valid credentials.
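
A rough way to check whether anything on a network you control answers on AMT's usual web interface ports (16992 for HTTP, 16993 for HTTPS) is a simple port scan; treat an open port as a prompt to investigate, not as proof of vulnerability. The subnet below is just a placeholder:

$ nmap -p 16992,16993 192.168.1.0/24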

If you're a corporate IT department, and if you have AMT enabled over wifi, turn it off. Now.

[1] Assuming that the network doesn't block client to client traffic, of course


Benjamin Mako Hill: Surviving an “Eternal September:” How an Online Community Managed a Surge of Newcomers

9 May, 2017 - 23:33

Attracting newcomers is among the most widely studied problems in online community research. However, with all the attention paid to the challenge of attracting new users, much less research has studied the flip side of that coin: large influxes of newcomers can pose major problems as well!

The most widely known example of problems caused by an influx of newcomers into an online community occurred in Usenet. Every September, new university students connecting to the Internet for the first time would wreak havoc in the Usenet discussion forums. When AOL connected its users to the Usenet in 1994, it disrupted the community for so long that it became widely known as “The September that never ended”.

Our study considered a similar influx in NoSleep—an online community within Reddit where writers share original horror stories and readers comment and vote on them. With strict rules requiring that all members of the community suspend disbelief, NoSleep thrives off the fact that readers experience an immersive storytelling environment. Breaking the rules is as easy as questioning the truth of someone’s story. Socializing newcomers represents a major challenge for NoSleep.

Number of subscribers and moderators on /r/NoSleep over time.

On May 7th, 2014, NoSleep became a “default subreddit”—i.e., every new user to Reddit automatically joined NoSleep. After gradually accumulating roughly 240,000 members from 2010 to 2014, the NoSleep community grew to over 2 million subscribers in a year. That said, NoSleep appeared to largely hold things together. This reflects the major question that motivated our study: How did NoSleep withstand such a massive influx of newcomers without enduring their own Eternal September?

To answer this question, we interviewed a number of NoSleep participants, writers, moderators, and admins. After transcribing, coding, and analyzing the results, we proposed that NoSleep survived because of three inter-connected systems that helped protect the community’s norms and overall immersive environment.

First, there was a strong and organized team of moderators who enforced the rules no matter what. They recruited new moderators knowing the community’s population was going to surge. They utilized a private subreddit for NoSleep’s staff. They were able to socialize and educate new moderators effectively. Although issuing sanctions against community members was often difficult, our interviewees explained that NoSleep’s moderators were deeply committed and largely uncompromising.

That commitment resonates within the second system that protected NoSleep: regulation by normal community members. From our interviews, we found that the participants felt a shared sense of community that motivated them both to socialize newcomers themselves as well as to report inappropriate comments and downvote people who violate the community’s norms.

Finally, we found that the technological systems protected the community as well. For instance, post-throttling was instituted to limit the frequency at which a writer could post their stories. Additionally, Reddit’s “Automoderator”, a programmable AI bot, was used to issue sanctions against obvious norm violators while running in the background. Participants also pointed to the tools available to them—the report feature and voting system in particular—to explain how easy it was for them to report and regulate the community’s disruptors.

This blog post was written with Charlie Kiene. The paper and work this post describes is collaborative work with Charlie Kiene and Andrés Monroy-Hernández. The paper was published in the Proceedings of CHI 2016 and is released as open access so anyone can read the entire paper here. A version of this post was published on the Community Data Science Collective blog.

Jonathan Dowland: The Cursed Hangar, a Doom map

9 May, 2017 - 22:21

I made a Doom map! Well, I actually made it over 10 years ago for Freedoom. A few months ago it was finally replaced, and so I thought I'd release it standalone. I took the opportunity to clean it up a little bit. You could consider it like a "director's cut". It's been a long time since I've done anything like this, and it was fun in some ways to revisit it, but it's not something I want to do again any time soon.

 

It requires a Boom-compatible Doom engine to play: Eternity, ZDoom, PrBoom+ are known to work.

Olivier Berger: Installing a Docker Swarm cluster inside VirtualBox with Docker Machine

9 May, 2017 - 19:02

I’ve documented the process of installing a Docker Swarm cluster inside VirtualBox with Docker Machine. This allows experimenting with Docker Swarm, the simple docker container orchestrator, over VirtualBox.

This allows you to play with orchestration scenarios without having to install docker on real machines.
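
The overall shape of such a setup looks roughly like this (a sketch only, not the exact commands from the linked write-up; machine names and the join token are placeholders):

# create two VirtualBox VMs with the Docker engine pre-installed
$ docker-machine create -d virtualbox manager1
$ docker-machine create -d virtualbox worker1
# initialise the swarm on the manager node
$ eval $(docker-machine env manager1)
$ docker swarm init --advertise-addr $(docker-machine ip manager1)
# join the worker, using the token printed by "swarm init"
$ eval $(docker-machine env worker1)
$ docker swarm join --token <worker-token> $(docker-machine ip manager1):2377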

Also, such an environment may be handy for teaching if you don't want to install docker on the lab's host. Installing the docker engine on Linux hosts for unprivileged users requires some care (refer to docs about securing Docker), as the default configuration may allow learners to easily gain root privileges (which may or may not be desired).

See more at http://www-public.telecom-sudparis.eu/~berger_o/docker/install-docker-machine-virtualbox.html

Reproducible builds folks: Reproducible Builds: week 106 in Stretch cycle

9 May, 2017 - 15:53

Here's what happened in the Reproducible Builds effort between Sunday April 30 and Saturday May 6 2017:

Past and upcoming events

Between May 5th-7th the Reproducible Builds Hackathon 2017 took place in Hamburg, Germany.

On May 13th Chris Lamb will give a talk on Reproducible Builds at OSCAL 2017 in Tirana, Albania.

Media coverage

Toolchain development and fixes

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Reviews of unreproducible packages

93 package reviews have been added, 12 have been updated and 98 have been removed in this week, adding to our knowledge about identified issues.

The following issues have been added:

2 issue types have been updated:

The following issues have been removed:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (3)
diffoscope development

strip-nondeterminism development

This week's edition was written by Chris Lamb, Holger Levsen and Ximin Luo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Martin Pitt: Cockpit is now just an apt install away

9 May, 2017 - 15:51

Cockpit has now landed in Debian unstable and in Ubuntu 17.04 and devel, which means it's now a simple

$ sudo apt install cockpit

away for you to try and use. This metapackage pulls in the most common plugins, which are currently NetworkManager and udisks/storaged. If you want/need, you can also install cockpit-docker (if you grab docker.io from jessie-backports or use Ubuntu) or cockpit-machines to administer VMs through libvirt. Cockpit upstream also has a rather comprehensive Kubernetes/Openstack plugin, but this isn’t currently packaged for Debian/Ubuntu as kubernetes itself is not yet in Debian testing or Ubuntu.

After that, point your browser to https://localhost:9090 (or the host name/IP where you installed it) and off you go.

What is Cockpit?

Think of it as an equivalent of a desktop (like GNOME or KDE) for configuring, maintaining, and interacting with servers. It is a web service that lets you log into your local or a remote (through ssh) machine using normal credentials (PAM user/password or SSH keys) and then starts a normal login session just as gdm, ssh, or the classic VT logins would.

The left side bar is the equivalent of a “task switcher”, and the “applications” (i. e. modules for administering various aspects of your server) are run in parallel.

The main idea of Cockpit is that it should not behave “special” in any way - it does not have any specific configuration files or state keeping and uses the same Operating System APIs and privileges as you would on the command line (such as lvmconfig, the org.freedesktop.UDisks2 D-Bus interface, reading/writing the native config files, and using sudo when necessary). You can simultaneously change stuff in Cockpit and in a shell, and Cockpit will instantly react to changes in the OS, e. g. if you create a new LVM PV or a network device gets added. This makes it fundamentally different to projects like webmin or ebox, which basically own your computer once you use them the first time.
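
As a small illustration of that point: the storage information Cockpit shows comes from the same UDisks2 D-Bus service you can query yourself from a shell (a sketch, assuming systemd's busctl and udisks2 are installed):

$ busctl tree org.freedesktop.UDisks2
$ busctl introspect org.freedesktop.UDisks2 /org/freedesktop/UDisks2/Manager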

It is an interface for your operating system, which even reflects in the branding: as you see above, this is Debian (or Ubuntu, or Fedora, or wherever you run it on), not “Cockpit”.

Remote machines

In your home or small office you often have more than one machine to maintain. You can install cockpit-bridge and cockpit-system on those for the most basic functionality, configure SSH on them, and then add them on the Dashboard (I add a Fedora 26 machine here) and from then on can switch between them on the top left, and everything works and feels exactly the same, including using the terminal widget:

The Fedora 26 machine has some more Cockpit modules installed, including a lot of “playground” ones, thus you see a lot more menu entries there.
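
Concretely, the per-machine setup amounts to something like this (a minimal sketch, assuming a Debian or Ubuntu remote; the host name is a placeholder):

# on each remote machine you want to manage from the Dashboard
$ sudo apt install cockpit-bridge cockpit-system
# the main Cockpit host talks to it over plain ssh, so verify ssh login works
$ ssh admin@remote.example.com true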

Under the hood

Beneath the fancy Patternfly/React/JavaScript user interface is the Cockpit API and protocol, which particularly fascinates me as a developer as that is what makes Cockpit so generic, reactive, and extensible. This API connects the worlds of the web, which speaks IPs and host names, ports, and JSON, to the “local host only” world of operating systems which speak D-Bus, command line programs, configuration files, and even use fancy techniques like passing file descriptors through Unix sockets. In an ideal world, all Operating System APIs would be remotable by themselves, but they aren’t.

This is where the “cockpit bridge” comes into play. It is a JSON (i. e. ASCII text) stream protocol that can control arbitrarily many “channels” to the target machine for reading, writing, and getting notifications. There are channel types for running programs, making D-Bus calls, reading/writing files, getting notified about file changes, and so on. Of course every channel can also act on a remote machine.

One can play with this protocol directly. E. g. this opens a (local) D-Bus channel named “d1” and gets a property from systemd’s hostnamed:

$ cockpit-bridge --interact=---

{ "command": "open", "channel": "d1", "payload": "dbus-json3", "name": "org.freedesktop.hostname1" }
---
d1
{ "call": [ "/org/freedesktop/hostname1", "org.freedesktop.DBus.Properties", "Get",
          [ "org.freedesktop.hostname1", "StaticHostname" ] ],
  "id": "hostname-prop" }
---

and it will reply with something like

d1
{"reply":[[{"t":"s","v":"donald"}]],"id":"hostname-prop"}
---

(“donald” is my laptop’s name). By adding additional parameters like host and passing credentials these can also be run remotely through logging in via ssh and running cockpit-bridge on the remote host.

Stef Walter explains this in detail in a blog post about Web Access to System APIs. Of course Cockpit plugins (both internal and third-party) don’t directly speak this, but use a nice JavaScript API.

As a simple example how to create your own Cockpit plugin that uses this API you can look at my schroot plugin proof of concept which I hacked together at DevConf.cz in about an hour during the Cockpit workshop. Note that I never before wrote any JavaScript and I didn’t put any effort into design whatsoever, but it does work ☺.

Next steps

Cockpit aims at servers and getting third-party plugins for talking to your favourite part of the system, which means we really want it to be available in Debian testing and stable, and Ubuntu LTS. Our CI runs integration tests on all of these, so each and every change that goes in is certified to work on Debian 8 (jessie) and Ubuntu 16.04 LTS, for example. But I’d like to replace the external PPA/repository on the Install instructions with just “it’s readily available in -backports”!

Unfortunately there’s some procedural blockers there, the Ubuntu backport request suffers from understaffing, and the Debian stable backport is blocked on getting it in to testing first, which in turn is blocked by the freeze. I will soon ask for a freeze exception into testing, after all it’s just about zero risk - it’s a new leaf package in testing.

Have fun playing around with it, and please report bugs!

Feel free to discuss and ask questions on the Google+ post.

Bits from Debian: Bursary applications for DebConf17 are closing in 48 hours!

9 May, 2017 - 03:30

This is a final reminder: if you intend to apply for a DebConf17 bursary and have not yet done so, please proceed as soon as possible.

Bursary applications for DebConf17 will be accepted until May 10th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk is taken into consideration for your bursary; if you have a submission to make, submit it even if it is only sketched out. You will be able to detail it later.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the wiki.

Note: For DebCamp we only have on-site accommodation available. The option chosen in the registration system will only be for the DebConf period (August 5 to 12).

See you in Montréal!

Daniel Pocock: Visiting Kamailio World (Sold Out) and OSCAL'17

8 May, 2017 - 21:50

This week I'm visiting Kamailio World (8-10 May, Berlin) and OSCAL'17 (13-14 May, Tirana).

Kamailio World

Kamailio World features a range of talks about developing and using SIP and telephony applications and offers many opportunities for SIP developers, WebRTC developers, network operators and users to interact. Wednesday, at midday, there is a Dangerous Demos session where cutting edge innovations will make their first (and potentially last) appearance.

OSCAL'17, Tirana

OSCAL'17 is an event that has grown dramatically in recent years and is expecting hundreds of free software users and developers, including many international guests, to converge on Tirana, Albania this weekend.

On Saturday I'll be giving a workshop about the Debian Hams project and Software Defined Radio. On Sunday I'll give a talk about Free Real-time Communications (RTC) and the alternatives to systems like Skype, Whatsapp, Viber and Facebook.

Lars Wirzenius: Ick2 design discussion

8 May, 2017 - 17:04

Recently, Daniel visited us in Helsinki. In addition to enjoying local food and scenery, we spent some time together in front of a whiteboard to sketch out designs for Ick2. Ick is my continuous integration system, and it's all Daniel's fault for suggesting the name. Ahem.

I am currently using the first generation of Ick and it is a rigid, cumbersome, and fragile thing. It works well enough that I don't miss Jenkins, but I would like something better. That's the second generation of Ick, or Ick2, and that's what we discussed with Daniel.

Where pretty much everything in Ick1 is hardcoded, everything in Ick2 will be user-configurable. It's my last, best chance to go completely overboard in the second system syndrome manner.

Where Ick1 was written in a feverish two-week hacking session, rushed because my Jenkins install at the time had broken one time too many, we're taking our time with Ick2. Slow and careful is the tune this time around.

Our "minimum viable product" or MVP for Ick2 is defined like this:

Ick2 builds static websites from source in a git repository, using ikiwiki, and publishes them to a web server using rsync. A change to the git repository triggers a new build. It can handle many separate websites, and if given enough worker machines, can build many of them concurrently.

This is a real task, and something we already do with Ick1 at work. It's a reasonable first step for the new program.

Some decisions we made:

  • The Ick2 controller, which decides which projects to build, and what's the next build step at any one time, will be reactive only. It will do nothing except in response to an HTTP API request. This includes things like timed events. An external service will need to poke the controller at the right time.

  • The controller will be accompanied by worker manager processes, which fetch instructions about what to do next, and control the actual workers over ssh.

  • Provisioning of the workers is out of scope for the MVP. For the MVP we are OK with a static list of workers. In the future we might make worker registration a dynamic thing, but not for the MVP. (Parts or all of this decision may be changed in the future, but we need to start somewhere.)

  • The MVP publishing will happen by running rsync to a web server. Providing credentials for the workers to do that is the sysadmin's problem, not something the MVP will handle itself.

  • The MVP needs to handle more than one worker, and more than one pipeline, and needs to build things concurrently when there's call for it.

  • The MVP will need to read the pipelines (and their steps and any other info) from YAML config files, and can't have that stuff hardcoded.

  • The MVP API will have no authentication or authorization stuff yet.

The initial pipelines will be basically like this, but expressed in some way by the user:

  1. Clone the source repository.
  2. Run ikiwiki --build to build the website.
  3. Run rsync to publish the website on a server.
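
Expanded into plain commands, those three steps might look roughly like this (all names and paths here are placeholders; the real steps will come from the YAML configuration):

git clone git://git.example.com/site.git src
ikiwiki --setup src/ikiwiki.setup          # builds the site into the configured destdir
rsync -av --delete /srv/build/site/ www.example.com:/srv/www/site/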

Assumptions:

  • Every worker can clone from the git server.
  • Every worker has all the build tools.
  • Every worker has rsync and access to every web server.
  • Every pipeline run is clean.

Actions the Ick2 controller API needs to support:

  • List all existing projects.
  • Trigger a project to build.
  • Query what project builds are running.
  • Get build logs for a project: current log (from the running build), and the most recent finished build.

A sketch API:

  • POST /projects/foo/+trigger

    Trigger a build of project foo. If the git repository hasn't changed, the build runs anyway.

  • GET /projects

    List names of all projects.

  • GET /projects/foo

    On second thought, I can't think of anything useful for this to return for the MVP. Scratch.

  • GET /projects/foo/logs/current

    Return entire known build log captured so far for the currently running build.

  • GET /projects/foo/logs/previous

    Return entire build log for latest finished build.

  • GET /work/bar

    Used by worker bar: return next not-yet-finished step to run as a JSON object containing fields "project" (name of project for which to run the step) and "shell" (a shell command to run). The call will return the same JSON object until the worker reports it as having finished.

  • POST /work/bar/snippet

    Used by worker bar to report progress on the currently running step: a JSON object containing fields "stdout" (string with output from the shell command's stdout), "stderr" (ditto but stderr), and "exit_code" (the shell command's exit code, if it's finished, or null).
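
For a feel of how this would be used, the client side of the sketch API could look like this (hypothetical; $CONTROLLER stands for the controller's base URL):

$ curl -X POST "$CONTROLLER/projects/foo/+trigger"   # queue a build of foo
$ curl "$CONTROLLER/projects"                        # list project names
$ curl "$CONTROLLER/projects/foo/logs/current"       # log of the running build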

Sequence:

  • Git server has a hook that calls "POST /projects/foo/+trigger" (or else this is simulated by the user).

  • Controller adds a build of project foo to the queue.

  • Worker manager calls "GET /work/bar", gets a shell command to run, and starts running it on its worker.

  • While worker runs shell command, every second or so, worker manager calls "POST /work/bar/snippet" to report progress including collected output, if any.

  • Controller responds with OK or KILL, and if the latter, worker kills the command it is running. Worker manager continues reporting progress via snippet until shell command is finished (on its own or by having been killed).

  • Controller appends any output reported via .../snippet. When it learns a shell command has finished, it updates its idea of the next step to run.

  • When controller learns a project has finished building, it rotates the current build log to be the previous one.
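
To make the sequence concrete, a worker manager's polling loop could be sketched like this (hypothetical code; $CONTROLLER is a placeholder, and the step is run locally rather than over ssh for brevity):

while true; do
    # fetch the next not-yet-finished step: JSON with "project" and "shell" fields
    step=$(curl -s "$CONTROLLER/work/bar")
    cmd=$(printf '%s' "$step" | jq -r .shell)
    # run the step and capture its output and exit code
    sh -c "$cmd" >out.log 2>err.log
    rc=$?
    # report the result back as a snippet (a real manager would stream these
    # every second or so while the command is still running)
    jq -n --arg out "$(cat out.log)" --arg err "$(cat err.log)" --argjson rc "$rc" \
        '{stdout: $out, stderr: $err, exit_code: $rc}' \
      | curl -s -X POST -d @- "$CONTROLLER/work/bar/snippet"
    sleep 1
done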

The next step will probably be to sketch a yarn test suite of the API and implement a rudimentary one.

Mike Gabriel: [Arctica Project] Release of nx-libs (version 3.5.99.7)

8 May, 2017 - 15:40
Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) has been originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Friday, May 5th 2017, version 3.5.99.7 of nx-libs has been released [1].

Credits

A special thanks goes to Ulrich Sibiller for tracking down a regression bug that caused tremendously slow keyboard input on high-latency connections. Thanks for that!

Another thanks goes to the Debian project for indirectly providing us with so many build platforms. We are nearly at the point where nx-libs builds on all architectures supported by the Debian project. (Runtime stability is a completely different issue, we will get to this soon).

Changes between 3.5.99.6 and 3.5.99.7
  • Include Debian patches, re-introducing GNU/Hurd and GNU/kFreeBSD support. Thanks to various porters on #debian-ports and #debian-hurd for feedback (esp. Paul Wise, James Clark, and Samuel Thibault).
  • Revert "Switch from using libNX_X11's deprecated XKeycodeToKeysym() function to using XGetKeyboardMapping()."
  • Mark XKeycodeToKeysym() function as not deprecated in libNX_X11 as using XGetKeyboardMapping() (as suggested by X.org devs) in nxagent is no option for us. XGetKeyboardMapping() simply creates far too many round trips for our taste.
Change Log

Lists of changes (since 3.5.99.6) can be obtained from here.

Known Issues

A list of known issues can be obtained from the nx-libs issue tracker [issues].

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component). The nxagent Xserver can be used from remote sessions (via the nxcomp compression library) or as a nested Xserver.

Ubuntu developers, please note: we have added nightly builds for the latest Ubuntu releases to our build server. At the moment, you can obtain nx-libs builds for Ubuntu 16.10 (yakkety) and 17.04 (zesty) as nightly builds.

References

Russ Allbery: Review: Chimes at Midnight

8 May, 2017 - 11:27

Review: Chimes at Midnight, by Seanan McGuire

Series: October Daye #7
Publisher: DAW
Copyright: 2013
ISBN: 1-101-63566-5
Format: Kindle
Pages: 346

Chimes at Midnight is the seventh book of the October Daye series and builds heavily on the previous books. Toby has gathered quite the group of allies by this point, and events here would casually spoil some of the previous books in the series (particularly One Salt Sea, which you absolutely do not want spoiled). I strongly recommend starting at the beginning, even if the series is getting stronger as it goes along.

This time, rather than being asked for help, the book opens with Toby on a mission. Goblin fruit is becoming increasingly common on the streets of San Francisco, and while she's doing all she can to find and stop the dealers, she's finding dead changelings. Goblin fruit is a pleasant narcotic to purebloods, but to changelings it's instantly and fatally addictive. The growth of the drug trade means disaster for the local changelings, particularly since previous events in the series have broken a prominent local changeling gang. That was for the best, but they were keeping goblin fruit out, and now it's flooding into the power vacuum.

In the sort of idealistic but hopelessly politically naive move that Toby is prone to, she takes her evidence to the highest local authority in faerie: the Queen of the Mists. The queen loathes Toby and the feeling is mutual, but Toby's opinion is that this shouldn't matter: these are her subjects and goblin fruit is widely recognized as a menace. Even if she cares nothing for their lives, a faerie drug being widely sold on the street runs the substantial risk that someone will give it to humans, potentially leading to the discovery of faerie.

Sadly, but predictably, Toby has underestimated the Queen's malevolence. She leaves the court burdened not only with the knowledge that the Queen herself is helping with the distribution of goblin fruit, but also an impending banishment thanks to her reaction. She has three days to get out of the Queen's territory, permanently.

Three days that the Luidaeg suggests she spend talking to people who knew King Gilad, the former and well-respected king of the local territory who died in the 1906 earthquake, apparently leaving the kingdom to the current Queen. Or perhaps not.

As usual, crossing Toby is a very bad idea, and getting Toby involved in politics means that one should start betting heavily against the status quo. Also, as usual, things initially go far too well, and then Toby ends up in serious trouble. (I realize the usefulness of raising the stakes of the story, but I do prefer the books of this series that don't involve Toby spending much of the book ill.) However, there is a vast improvement over previous books in the story: one key relationship (which I'll still avoid spoiling) is finally out of the precarious will-they, won't-they stage and firmly on the page, and it's a relationship that I absolutely love. Watching Toby stomp people who deserve to be stomped makes me happy, but watching Toby let herself be happy and show it makes me even happier.

McGuire also gives us some more long-pending revelations. I probably should have guessed the one about one of Toby's long-time friends and companions much earlier, although at least I did so a few pages before Toby found out. I have some strong suspicions about Toby's own background that were reinforced by this book, and will be curious to see if I'm right. And I'm starting to have guesses about the overall arc of the series, although not firm ones. One of my favorite things in long-running series is the slow revelation of more and more world background, and McGuire does it in just the way I like: lots of underlying complexity, reveals timed for emotional impact but without dragging on things that the characters should obviously be able to figure out, and a whole bunch of layered secrets that continue to provide more mystery even after one layer is removed.

The plot here is typical of the plot of the last couple of novels in the series, which is fine by me since my favorite part of this series is the political intrigue (and Toby realizing that she has far more influence than she thinks). It helps that I thought Arden was great, given how central she is to this story. I liked her realistic reactions to her situation, and I liked her arguments with Toby. I'm dubious how correct Toby actually was, but we've learned by now that arguments from duty are always going to hold sway with her. And I loved Mags and the Library, and hope we'll be seeing more of them in future novels.

The one quibble I'll close with, since the book closed with it, is that I found the ending rather abrupt. There were several things I wanted to see in the aftermath, and the book ended before they could happen. Hopefully that means they'll be the start of the next book (although a bit of poking around makes me think they may be in a novella).

If you've liked the series so far, particularly the couple of books before this one, this is more of what you liked. Recommended.

Followed by The Winter Long.

Rating: 8 out of 10

Dirk Eddelbuettel: RInside 0.2.14

7 May, 2017 - 22:43

A new release 0.2.14 of RInside is now on CRAN and in Debian.

RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

It has been nearly two years since the last release, and a number of nice extensions, build robustifications and fixes have been submitted over this period---see below for more. The only larger and visible extension is both a new example and some corresponding internal changes to allow a readline prompt in an RInside application, should you desire it.

RInside is stressing the CRAN system a little in that it triggers a number of NOTE and WARNING messages. Some of these are par for the course as we get close to R internals not all of which are "officially" in the API. This led to the submission sitting a little longer than usual in the incoming queue. Going forward we may need to find a way to either sanction these access points, whitelist them or, as a last resort, take the package off CRAN. Time will tell.

Changes since the last release were:

Changes in RInside version 0.2.14 (2017-04-28)
  • Interactive mode can use readline REPL (Łukasz Łaniewski-Wołłk in #25, and Dirk in #26)

  • Windows macros checks now uses _WIN32 (Kevin Ushey in #22)

  • The wt example now links with libboost_system

  • The Makevars file is now more robust (Mattias Ellert in #21)

  • A problem with empty environment variable definitions on Windows was addressed (Jeroen Ooms in #17 addressing #16)

  • HAVE_UINTPTR_T is defined only if not already defined

  • Travis CI is now driven via run.sh from our forked r-travis

CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Vasudev Kamath: Tips for fuzzing network programs with AFL

7 May, 2017 - 16:30

Fuzzing is a method of producing random malformed inputs for a piece of software and observing its behavior. If the software crashes, there is a bug, and it can have security implications. Fuzzing has gained a lot of interest nowadays, especially with automated tools like American Fuzzy Lop (AFL) which can easily help you fuzz a program and record the inputs that cause it to crash.

American Fuzzy Lop is a file-based fuzzer which feeds input to the program via standard input. Using it with network programs like servers or clients is not possible out of the box. There is a version of AFL with patches that allow it to fuzz network programs, but this patch is not merged upstream and I do not know if it will ever make it into upstream. Also, that repository contains version 1.9, which is older than the currently released versions.

There is another method for fuzzing network programs using AFL, with the help of LD_PRELOAD tricks. preeny is a project which provides a library that, when used with LD_PRELOAD, can desocket the network program and make it read from stdin.

There is an excellent tutorial from LoLWare which talks about fuzzing Nginx with preeny and AFL, and an excellent AFL workflow write-up by Foxglove Security which gives start-to-finish details about how to use AFL and its companion tools for fuzzing. So I'm not going to describe the fuzzing steps in this post; instead I'm going to list my observations on the changes that need to be made to get clean fuzzing with AFL and preeny.

  1. desock.so provided by preeny works only with the read and write system calls (or rather, the other system calls do not work with stdin), hence you need to make sure you replace any use of send and sendto with write, and any use of recv and recvfrom with read. Without this change the program will not read the input provided by AFL on standard input.
  2. If your network program uses a forking or threading model, make sure to remove all of that and turn it into a plain, simple program which receives a request and sends out a response. Basically you are testing the program's ability to handle malformed input, so you need only the minimum logic required to make the program do what it is supposed to do when AFL runs it.
  3. If you are using an infinite loop, as most servers do, replace it with the AFL macro shown below and use afl-clang-fast to compile the program. This speeds up the testing as AFL will run the job n times before discarding the current fork and doing a fresh fork. After all, fork is a costly affair.
while (__AFL_LOOP(1000)) {  // change 1000 to the desired iteration count
    // your request-handling logic here
}

With the above modifications I could fuzz a server program talking a binary protocol and another one talking a textual protocol. In both cases I used a Wireshark capture to get the packets, extracted the raw content, and fed it to AFL as the seed input.
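
For reference, the overall invocation ends up looking roughly like this (a sketch; file names and the path to preeny's desock.so are placeholders):

# build the simplified server with AFL instrumentation
$ afl-clang-fast -o server-fuzz server.c
# seed input: raw request bytes extracted from the Wireshark capture
$ mkdir input && cp request.raw input/
# preload preeny's desock.so so the target reads its "network" input from stdin
$ AFL_PRELOAD=/path/to/preeny/desock.so afl-fuzz -i input -o findings -- ./server-fuzz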

I was more successful in finding exploitable crashes in the textual protocol program than in the binary protocol one. In the binary protocol case AFL could not easily find new paths, which is probably because of the poor seed inputs I provided. I will continue to experiment with the binary protocol case and post my findings as updates here.

If you have anything more to add, do share it with me :-). Happy fuzzing!

Dirk Eddelbuettel: x13binary 1.1.39-1

6 May, 2017 - 21:29

The US Census Bureau released a new build 1.1.39 of their X-13ARIMA-SEATS program, available as binary and source. So Christoph and I went to work and updated our x13binary package on CRAN.

The x13binary package takes the pain out of installing X-13ARIMA-SEATS by making it a fully resolved CRAN dependency. For example, if you install the excellent seasonal package by Christoph, it will get pulled in and things just work: depend on x13binary and, on all relevant OSs supported by R, you will have an X-13ARIMA-SEATS binary installed which will be called seamlessly by the higher-level packages. So the full power of what is likely the world's most sophisticated deseasonalization and forecasting package is now at your fingertips at the R prompt, just like any other of the 10,500+ CRAN packages.

Not many packaging changes, but we switched our versioning scheme to reflect that our releases are driven by the US Census Bureau releases.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Vasudev Kamath: Magic: Attach an HTML file to a mail and get something different on the other side

6 May, 2017 - 14:49

It's been a long time since I wrote any new blog posts, and finally I have something interesting enough for a new post, so here it is.

I actually wanted some bills from my ISP for some work, and I could not find the mails from the ISP with the bills for some specific months in my mailbox. The problem with my ISP is that bills are only accessible from their account portal, which can only be reached from their own network. They do have a mobile app, but it does not seem to work, especially the billing section. I used the mobile app, selected a month for which I did not have the bill, and clicked the Send Mail button. The app happily showed a message saying it had sent the bill to my registered mail address, but that was a fancy lie. After trying several times I gave up and decided I would do it once I got back home.

Fast forward a few hours: I'm back home from the office. I sat in front of my laptop, connected to the ISP site, promptly downloaded the bills, then used notmuch-emacs to fire up a mail composer, attached those HTML files (yeah, they produce the bill as an HTML file :-/), sent them to my Gmail and forgot about it.

Fast forward a few more days: I remembered I needed those bills. I got hold of the mail I had sent earlier in my Gmail inbox and clicked on the attachment, and when the browser opened the attachment I was shocked/surprised. Why, you ask? The attachment rendered as nothing but Chinese characters.

Well, I was confused for a moment about what had happened. I tried downloading the file and opened it in an editor, and I saw Chinese characters there as well. After reaching home I checked the Sent folder on my laptop, where I keep a copy of mails sent using notmuch-emacs's notmuch-fcc-dirs variable. I opened the file, opened the attachment, and saw the same characters I had seen in the browser! I opened the original bills I had downloaded and they looked fine. To check whether I had really attached the correct files, I again drafted a mail, this time to my personal email, and sent it. Now I opened the file from the Sent folder and, voila, again there was different content inside my attachment! The same thing happened when I received the mail and opened the attachment: everything inside was Chinese characters!

Totally lost, I finally opened the original file using Emacs. Since I use Spacemacs it asked me to download the html layer, and after that I was surprised again, because everything inside the editor was in Chinese! OK, finally I had something to go on, so I opened the same file at the same time in nano from a terminal, and there it looked fine! That was weird again, so I read the content carefully and saw this line at the beginning of the file:

<?xml version="1.0" encoding="UTF-16" ?>
<html xmlns="http://www.w3.org/1999/xhtml">....

OK, that was new: the XML tag had the encoding declared as UTF-16. Really? Just to check, I changed it to UTF-8, and voila, the file was back to normal in the Emacs window! To confirm this behavior I created a sample file with the following content:

<?xml version="1.0" encoding="utf-8"?><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /><title> Something weird </title><body>blablablablablalabkabakba</body></html>

Yeah, with a long line, because the ISP's file was a similar case. Now I opened the same file in nano from a terminal and changed encoding="UTF-8" to encoding="UTF-16", and the behavior repeated: I saw Chinese characters in the Emacs buffer which had the same file open.

Below is a small gif showing this happening in real time; see what happens in the Emacs buffer when I change the encoding in the terminal below. Fun, right?

I made the following observations from the above experiments.

  1. When I open the original file, or my sample file with encoding="UTF-16", in a browser, it looks normal: no Chinese characters.
  2. When I attach the file to a mail and send it out, the attachment somehow gets converted into an HTML file with Chinese characters; viewing the source in the browser shows a header containing XML declarations, the original <html> declarations get stripped off, and new tags show up, which I've pasted below.
  3. If I download the same file and open it in an editor, only Chinese characters are present and there are no HTML tags inside it. So the new tags which I saw by viewing the source in the browser were definitely added by Firefox.
  4. If I create a similar HTML file and change the encoding, I can see the characters changing back and forth in Emacs' web-mode as I change the encoding of the file.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
  <head><META http-equiv="Content-Type"  content="text/html; charset=utf-8">

So if I'm understanding this correctly, Emacs interprets the actual contents differently because of the encoding declaration. To see how Emacs really interpreted the content, I opened the sent mail in raw format and saw the following header lines for the attachment:

Content-Type: text/html; charset=utf-8
Content-Disposition: attachment;
             filename=SOA-102012059675-FEBRUARY-2017.html
Content-Transfer-Encoding: base64

This was followed by base64-encoded data. So does this mean Emacs interpreted the content as UTF-16 and re-encoded it as UTF-8? Again, I have no clue, so I changed the encoding declaration in both files to UTF-8 and sent the mail again with these files attached, to see what happens. My guess was right: I got the attachments as-is on the receiving side. Inspecting the raw mail, the attachment headers are now different than before.

Content-Type: text/html
Content-Disposition: attachment;
             filename=SOA-102012059675-DECEMBER-2016.html
Content-Transfer-Encoding: quoted-printable

See how the Content-Type is different now; also note that the Content-Transfer-Encoding is now quoted-printable as opposed to base64 earlier. Additionally, I can see the HTML content below the header. When I opened the attachment from the mail, I got the actual bill.

As far as I understand, base64 encoding is used when the data to be attached is binary. So I guess that, due to the wrong encoding declared inside the file, Emacs interpreted the content as binary data and encoded it differently than it really should have. Phew, that was a lot of investigation to understand the magic, but it was worth it.
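
For what it's worth, a quick way to spot and fix such a misleading declaration from a shell before attaching the file would be something like this (a sketch using standard tools; the file name is the one from the headers above):

# check the byte-level encoding the file actually uses
$ file -bi SOA-102012059675-FEBRUARY-2017.html
# fix the misleading XML declaration in place
$ sed -i 's/encoding="UTF-16"/encoding="UTF-8"/' SOA-102012059675-FEBRUARY-2017.html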

Do let me know your thoughts on this.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.