Planet Debian


Joey Hess: extending Scuttlebutt with Annah

11 hours 50 min ago

This post has it all. Flotillas of sailboats, peer-to-peer wikis, games, and de-frogging. But, I need to start by talking about some tech you may not have heard of yet...

  • Scuttlebutt is a way for friends to share feeds of content-addressed messages, peer-to-peer. Most Scuttlebutt clients currently look something like Facebook, but there are also github clones, chess games, etc. There are many private, encrypted conversations going on. All entirely decentralized.
    (My scuttlebutt feed can be viewed here)

  • Annah is a purely functional, strongly typed language. Its design allows individual atoms of the language to be put in content-addressed storage, right down to data types. So the value True and a hash of the definition of what True is can both be treated the same by Annah's compiler.
    (Not to be confused with my sister, Anna, or part of the Debian Installer with the same name that I wrote long ago.)

So, how could these be combined together, and what might the result look like?

Well, I could start by posting a Scuttlebutt message that defines what True is. And another Scuttlebutt message defining False. And then, another Scuttlebutt message to define the AND function, which would link to my messages for True and False. Continue this until I've built up enough Annah code to write some almost useful programs.

Annah can't do any IO on its own (though it can model IO similarly to how Haskell does), so for programs to be actually useful, there needs to be Scuttlebutt client support. The way typing works in Annah, a program's type can be expressed as a Scuttlebutt link. So a Scuttlebutt client that wants to run Annah programs of a particular type can pick out programs that link to that type, and will know what type of data the program consumes and produces.
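
To make the content-addressing idea concrete, here is a minimal sketch in Python (purely illustrative; Annah's actual encoding and Scuttlebutt's message format are both richer than this) of publishing definitions as hash-addressed messages that later definitions can link to:

import hashlib
import json

store = {}  # stand-in for the Scuttlebutt message pool

def publish(definition):
    """Store a definition and return its content address (hash)."""
    blob = json.dumps(definition, sort_keys=True).encode()
    key = hashlib.sha256(blob).hexdigest()
    store[key] = definition
    return key

# Publish the primitive definitions...
true_id = publish({"define": "True"})
false_id = publish({"define": "False"})

# ...then publish AND, linking to True and False by hash, not by name.
and_id = publish({"define": "AND", "links": [true_id, false_id]})

# A client resolves AND by its hash and can follow the links,
# treating a value and the hash of its definition uniformly.
print(store[and_id])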

Here are a few ideas of what could be built, with fairly simple client-side support for different types of Annah programs...

  • Shared dashboards. Boats in a flotilla are communicating via Scuttlebutt, and want to share a map of their planned courses. Coders collaborating via Scuttlebutt want to see an overview of the state of their project.

    For this, the Scuttlebutt client needs a way to run a selected Annah program of type Dashboard, and display its output like a Scuttlebutt message, in a dashboard window. The dashboard message gets updated whenever other Scuttlebutt messages come in. The Annah program picks out the messages it's interested in, and generates the dashboard message.

    So, send a message updating your boat's position, and everyone sees it update on the map. Send a message with updated weather forecasts as they're received, and everyone can see the storm developing. Send another message updating a waypoint to avoid the storm, and steady as you go...

    The coders, meanwhile, probably tweak their dashboard's code every day. As they add git-ssb repos, they make the dashboard display an overview of their bugs. They get CI systems hooked in and feeding messages to Scuttlebutt, and make the dashboard go green or red. They make the dashboard A-B test itself to pick the right shade of red. And so on...

    The dashboard program is stored in Scuttlebutt so everyone is on the same page, and the most recent version of it posted by a team member gets used. (Just have the old version of the program notice when there's a newer version, and run that one..)

    (Also could be used in disaster response scenarios, where the data and visualization tools get built up on the fly in response to local needs, and are shared peer-to-peer in areas without internet.)

  • Smart hyperlinks. When a hyperlink in a Scuttlebutt message points to an Annah program, optionally with some Annah data, clicking on it can run the program and display the messages that the program generates.

    This is the most basic way a Scuttlebutt client could support Annah programs, and it could be used for tons of stuff. A few examples:

    • Hiding spoilers. Click on the link and it'll display a spoiler about a book/movie.
    • A link to whatever I was talking about one year ago today. That opens different messages as time goes by. Put it in your Scuttlebutt profile or something. (Requires a way for Annah to get the current date, which it normally has no way of accessing.)
    • Choose-your-own-adventure or Twine-style games. Click on the link and the program starts the game, displaying links to choose between, and so on.
    • Links to custom views. For example, a link could lead to a combination of messages from several different, related channels. Or could filter messages in some way.
  • Collaborative filtering. Suppose I don't want to see frog-related memes in my Scuttlebutt client. I can write an Annah program that calculates a message's frogginess, and outputs a Filtered Message. It can leave a message unchanged, or filter it out, or perhaps minimize its display. I publish the Annah program on my feed, and tell my Scuttlebutt client to filter all messages through it before displaying them to me.

    Since I published the program in my Scuttlebutt feed, my friends can use it too. They can build other filtering functions for other stuff (such as an excess of orange in photos), and integrate my frog filter into their filter program by simply composing the two (a small sketch of such composition follows this list).

    If I like their filter, I can switch my client to using it. Or not. Filtering is thus subjective, like Scuttlebutt, and the subjectivity is expressed by picking the filter you want to use, or developing a better one.

  • Wiki pages. Scuttlebutt is built on immutable append-only logs; it doesn't have editable wiki pages. But they can be built on top using Annah.

    A smart link to a wiki page is a reference to the Annah program that renders it. Of course being a wiki, there will be more smart links on the wiki page going to other wiki pages, and so on.

    The wiki page includes a smart link to edit it. The editor needs basic form support in the Scuttlebutt client; when the edited wiki page is posted, the Annah program diffs it against the previous version and generates an Edit which gets posted to the user's feed. Rendering the page is just a matter of finding the Edit messages for it from people who are allowed to edit it, and combining them.

    Anyone can fork a wiki page by posting an Edit to their feed. And can then post a smart link to their fork of the page.

    And anyone can merge other forks into their wiki page (this posts a control message that makes the Annah program implementing the wiki accept those forks' Edit messages). Or grant other users permission to edit the wiki page (another control message). Or grant other users permissions to grant other users permissions.

    There are lots of different ways you might want your wiki to work. No one wiki implementation, but lots of Annah programs. Others can interact with your wiki using the program you picked, or fork it and even switch the program used. Subjectivity again.

  • User-defined board games. The Scuttlebutt client finds Scuttlebutt messages containing Annah programs of type Game, and generates a tab with a list of available games.

    The players of a particular game all experience the same game interface, because the code for it is part of their shared Scuttlebutt message pool, and the code to use gets agreed on at the start of a game.

    To play a game, the Scuttlebutt client runs the Annah program, which generates a description of the current contents of the game board.

    So, for chess, use Annah to define a ChessMove data type, and the Annah program takes the feeds of the two players, looks for messages containing a ChessMove, and builds up a description of the chess board.

    As well as the pieces on the game board, the game board description includes Annah functions that get called when the user moves a game piece. That generates a new ChessMove which gets recorded in the user's Scuttlebutt feed.

    This could support a wide variety of board games. If you don't mind the possibility that your opponent might cheat by peeking at the random seed, even games involving things like random card shuffles and dice rolls could be built. Also there can be games like Core Wars where the gamers themselves write Annah programs to run inside the game.

    Variants of games can be developed by modifying and reusing game programs. For example, timed chess is just the chess program with an added check on move time, and time clock display.

  • Decentralized chat bots. Chat bots are all the rage (or were a few months ago, tech fads move fast), but in a decentralized system like Scuttlebutt, a bot running on a server somewhere would be an ugly point of centralization. Instead, write an Annah program for the bot.

    To launch the bot, publish a message in your own personal Scuttlebutt feed that contains the bot's program, and a nonce.

    The user's Scuttlebutt client takes care of the rest. It looks for messages with bot programs, and runs the bot's program. This generates or updates a Scuttlebutt message feed for the bot.

    The bot's program signs the messages in its feed using a private key that's generated by combining the user's public key, and the bot's nonce. So, the bot has one feed per user it talks to, with deterministic content, which avoids a problem with forking a Scuttlebutt feed.

    The bot-generated messages can be stored in the Scuttlebutt database like any other messages and replicated around. The bot appears as if it were a Scuttlebutt user. But you can have conversations with it while you're offline.

    (The careful reader may have noticed that deeply private messages sent to the bot can be decrypted by anyone! This bot thing is probably a bad idea really, but maybe the bot fad is over anyway. We can only hope. It's important that there be at least one bad idea in this list..)
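
Returning to the collaborative filtering idea above: the composability that makes those filters shareable can be sketched in a few lines. This is plain Python rather than Annah, and the Message and Filter types are hypothetical stand-ins, but the shape of the idea carries over:

from typing import Callable, Optional

# A filter takes a message and returns it (possibly modified),
# or None to drop it entirely. These types are illustrative only.
Message = dict
Filter = Callable[[Message], Optional[Message]]

def frog_filter(msg: Message) -> Optional[Message]:
    """Drop messages that look too froggy."""
    return None if "frog" in msg.get("text", "").lower() else msg

def orange_filter(msg: Message) -> Optional[Message]:
    """Minimize the display of overly orange photos."""
    if msg.get("orange_ratio", 0.0) > 0.5:
        return {**msg, "display": "minimized"}
    return msg

def compose(f: Filter, g: Filter) -> Filter:
    """Run f, then g, short-circuiting when a message is dropped."""
    def combined(msg: Message) -> Optional[Message]:
        out = f(msg)
        return None if out is None else g(out)
    return combined

my_filter = compose(frog_filter, orange_filter)
print(my_filter({"text": "Frog meme of the day"}))     # None: dropped
print(my_filter({"text": "hi", "orange_ratio": 0.7}))  # minimized

Because each filter is a pure function from message to filtered message, anyone can take two published filters and derive a third; picking which composition to run is exactly the subjectivity described above.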

This kind of extensibility in a peer-to-peer system is exciting! With these new systems, we can consider lessons from the world wide web and replicate some of the good parts, while avoiding the bad. JavaScript has been both good and bad for the web. The extensibility is great, and yet it's a never-ending security and privacy nightmare, and it ties web pages ever more tightly to programs hidden away on servers. I believe that Annah combined with Scuttlebutt will comprehensively avoid those problems. Shall we build it?

This exploration was sponsored by Jake Vosloo on Patreon.

Michal Čihař: Gammu 1.38.5

18 October, 2017 - 17:00

Today, Gammu 1.38.5 has been released. After a long period of bugfix-only releases, this one comes with several noteworthy new features.

Probably the biggest feature is that SMSD can now handle USSD messages as well. Those are usually used for things like checking remaining credit, but they are certainly not limited to that. This feature was contributed thanks to funding on BountySource.

You can read more information in the release announcement.


Steinar H. Gunderson: Introducing Narabu, part 1: Introduction

18 October, 2017 - 15:01

Narabu is a new intraframe video codec, from the Japanese verb narabu (並ぶ), which means to line up or be parallel.

Let me first state straight up that Narabu isn't where I hoped it would be at this stage; the encoder isn't fast enough, and I have to turn my attention to other projects for a while. Nevertheless, I think it is interesting as a research project in its own right, and I don't think it should stop me from trying to write up a small series. :-)

In the spirit of Leslie Lamport, I'll be starting off with describing what problem I was trying to solve, which will hopefully make the design decisions a lot clearer. Subsequent posts will dive into background information and then finally Narabu itself.

I want a codec to send signals between different instances of Nageru, my free software video mixer, and also longer-term between other software, such as recording or playout. The reason is pretty obvious for any sort of complex configuration; if you are doing e.g. both a stream mix and a bigscreen mix, they will naturally want to use many of the same sources, and sharing them over a single GigE connection might be easier than getting SDI repeaters/splitters, especially when you have a lot of them. (Also, in some cases, you might want to share synthetic signals, such as graphics, that never existed on SDI in the first place.)

This naturally leads to the following demands:

  • Intraframe-only; every frame must be compressed independently. (This isn't strictly needed for all use cases, but is much more flexible, and common in any kind of broadcast.)
  • Need to handle 4:2:2 color, since that's what most capture sources give out, and we want to transmit the raw signals as much as possible. Fairly flexible in input resolution (divisible by 16 is okay, limited to only a given set of resolutions is not).
  • 720p60 video in less than one CPU core (ideally much less); the CPU can already be pretty busy with other work, like x264 encoding of the finished stream, and sharing four more inputs at the same time is pretty common. What matters is mostly a single encode+decode cycle, so fast decode doesn't help if the encoder is too slow.
  • Target bitrates around 100-150 Mbit/sec, at similar quality to MJPEG (i.e. 45 dB PSNR for most content); a quick arithmetic check of these targets follows this list. Multiple signals should fit into a normal GigE link at the same time, although getting it to work over 802.11 isn't a big priority.
  • Both encoder and decoder robust to corrupted or malicious data; a dropped frame is fine, a crash is not.
  • Does not depend on uncommon or expensive hardware, or GPUs from a specific manufacturer.
  • GPLv3-compatible implementation. I already link to GPLv3 software, so I don't have a choice here; I cannot link to something non-free (and no antics with dlopen(), please).
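
To put the bitrate target in perspective, here is a quick back-of-the-envelope check of the numbers above (plain Python arithmetic, nothing Narabu-specific):

import math

# Raw 720p60 4:2:2 8-bit video: 2 bytes per pixel on average
# (one luma sample plus half of each chroma plane per pixel).
width, height, fps = 1280, 720, 60
raw_bits_per_sec = width * height * 2 * 8 * fps
print(f"raw: {raw_bits_per_sec / 1e6:.0f} Mbit/sec")   # ~885 Mbit/sec

# The 100-150 Mbit/sec target thus implies roughly 6-9x compression,
# which is modest as codecs go, but must be achieved very cheaply.
for target in (100e6, 150e6):
    print(f"{target / 1e6:.0f} Mbit/sec target -> "
          f"{raw_bits_per_sec / target:.1f}x compression")

# 45 dB PSNR on 8-bit samples corresponds to a mean squared error of
# about 2.1, i.e. an RMS error of ~1.4 out of 255 -- visually clean.
mse = 255 ** 2 / 10 ** (45 / 10)
print(f"MSE at 45 dB: {mse:.2f}, RMS: {math.sqrt(mse):.2f}")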

There's a bunch of intraframe formats around. The most obvious thing to do would be to use Intel Quick Sync to produce H.264 (intraframe H.264 blows basically everything else out of the sky in terms of PSNR, and QSV hardly uses any power at all), but sadly, that's limited to 4:2:0. I thought about encoding the three color planes as three different monochrome streams, but monochrome is not supported either.

Then there's a host of software solutions. x264 can do 4:2:2, but even on ultrafast, it gobbles up an entire core or more at 720p60 at the target bitrates (mostly in entropy coding). FFmpeg has implementations of all kinds of other codecs, like DNxHD, CineForm, MJPEG and so on, but they all use much more CPU for encoding than the target. NDI would seem to fit the bill exactly, but fails the licensing check, and also isn't robust to corrupted or malicious data. (That, and their claims about video quality are dramatically overblown for any kinds of real video data I've tried.)

So, sadly, this leaves only really one choice, namely rolling my own. I quickly figured I couldn't beat the world on CPU video codec speed, and didn't really want to spend my life optimizing AVX2 DCTs anyway, so again, the GPU will come to our rescue in the form of compute shaders. (There are some other GPU codecs out there, but all that I've found depend on CUDA, so they are NVIDIA-only, which I'm not prepared to commit to.) Of course, the GPU is quite busy in Nageru, but if one can make an efficient enough codec that one stream can work at only 5% or so of the GPU (meaning 1200 fps or so), it wouldn't really make a dent. (As a spoiler, the current Narabu encoder isn't there for 720p60 on my GTX 950, but the decoder is.)

In the next post, we'll look a bit at the GPU programming model, and what it means for how our codec needs to look at the design level.

Norbert Preining: Kobo firmware 4.6.9995 mega update (KSM, nickel patch, ssh, fonts)

18 October, 2017 - 07:54

It has been ages since I last updated the MegaUpdate package for Kobo. Now that a new and seemingly rather bug-free and quick firmware release (4.6.9995) is out, I finally took the time to update the whole package to the latest releases of all the included items. The update includes all my favorite patches and features: Kobo Start Menu, koreader, coolreader, pbchess, ssh access, custom dictionaries, and some side-loaded fonts.

So what are all these items:

  • firmware (thread): the basic software of the device, shipped by Kobo company
  • Metazoa firmware patches (thread): fix some layout options and functionalities, see below for details.
  • Kobo Start Menu (V08, update 5b thread): a menu that pops up before the reading software (nickel) starts, which allows starting alternative readers (like koreader) etc.
  • KOreader (koreader-nightly-20171004, thread): an alternative document reader that supports epub, azw, pdf, djvu and many more
  • pbchess and CoolReader (2017.10.14, thread): a chess program and another alternative reader, bundled together with several other games
  • kobohack (web site): I only use the ssh server
  • ssh access (old post): makes a full computer from your device by allowing you to log into it via ssh
  • custom dictionaries (thread): this fix updates dictionaries from the folder customdicts to the Kobo dictionary folder. For creating your own Japanese-English dictionary, see this blog entry
  • side-loaded fonts: GentiumBasic and GentiumBookBasic, Verdana, DroidSerif, and Charter-eInk

Install procedure

Download

Mark6 – Kobo GloHD

firmware: Kobo 4.6.9995 for GloHD

Mega update: Kobo-4.6.9995-combined/Mark6/KoboRoot.tgz

Mark5 – Aura

firmware: Kobo 4.6.9995 for Aura

Mega update: Kobo-4.6.9995-combined/Mark5/KoboRoot.tgz

Mark4 – Kobo Glo, Aura HD

firmware: Kobo 4.6.9995 for Glo and AuraHD

Mega update: Kobo-4.6.9995-combined/Mark4/KoboRoot.tgz

Latest firmware

Warning: Sideloading or crossloading the incorrect firmware can break/brick your device. The link below is for Kobo GloHD ONLY.

The first step is to update the Kobo to the latest firmware. This can easily be done by just getting the latest firmware from the links above and unpacking the zip file into the .kobo directory on your device. Eject and enjoy the updating procedure.

Mega update

Get the combined KoboRoot.tgz for your device from the links above and put it into the .kobo directory, then eject and enjoy the updating procedure again.

After this the device should reboot and you will be kicked into KSM, from where, after some waiting, Nickel will be started. If you consider the fonts too small, select Configure, then General, then add item, then select kobomenuFontsize=55 and save.

Remarks on some of the items included

The full list of included items is above; here are only some notes about the specifics of what I have done.

  • Metazoa firmware patches

    Included patches from the Metazoa firmware patches:

    Custom left & right margins
    Fix three KePub fullScreenReading bugs
    Change dicthtml strings to micthtml
    Default ePub monospace font (Courier)
    Custom reading footer style
    Dictionary pop-up frame size increase
    Increase The Cover Size In Library
    Increasing The View Details Container
    New home screen increasing cover size
    Reading stats/Author name cut when the series is showing bug fix 
    New home screen subtitle custom font
    Custom font to Collection and Authors names
    

    If you need/want different patches, you need to do the patching yourself.

  • kobohack-h

    Kobohack (latest version 20150110) originally provided updated libraries and optimizations, but unfortunately it is now completely outdated, and using it is not recommended for the library part. I only include the ssh server (dropbear) so that one can connect to the Kobo via ssh.

  • ssh fixes

    See the detailed instructions here; the necessary files are already included in the mega upload. It updates /etc/inittab to also run /etc/init.d/rcS2, which in turn starts the inetd server and runs user-supplied commands from /mnt/onboard/run.sh, which lives where your documents are.

  • Custom dictionaries

    The necessary directories and scripts are already included in the above combined KoboRoot.tgz, so nothing needs to be done other than dropping updated, fixed, or changed dictionaries into the directory customdict in your Kobo root. After this you need to reboot to get the actual dictionaries updated. See this thread for more information. The adaptations and script mentioned in this post are included in the mega update.

WARNINGS

If this is the first time you install this patch, you need to fix the password for root and disable telnet. This is an important step; here are the steps you have to take (taken from this old post):

  1. Turn on Wifi on the Kobo and find IP address
    Go to Settings – Connect and after this is done, go to Settings – Device Information where you will see something like
    IP Address: 192.168.1.NN
    (numbers change!)
  2. telnet into your device
    telnet 192.168.1.NN
    it will ask you the user name, enter “root” (without the quotes) and no password
  3. (ON THE GLO) change home directory of root
    edit /etc/passwd with vi and change the entry for root by changing the 6th field from: “/” to “/root” (without the quotes). After this procedure the line should look like
    root::0:0:root:/root:/bin/sh
    don’t forget to save the file
  4. (ON THE GLO) create ssh keys for dropbear
    [root@(none) ~]# mkdir /etc/dropbear
    [root@(none) ~]# cd /etc/dropbear
    [root@(none) ~]# dropbearkey -t dss -f dropbear_dss_host_key
    [root@(none) ~]# dropbearkey -t rsa -f dropbear_rsa_host_key
  5. (ON YOUR PERSONAL COMPUTER) check that you can log in with ssh
    ssh root@192.168.1.NN
    You should get dropped into your device again
  6. (ON THE GLO) log out of the telnet session (the first one you did)
    [root@(none) ~]# exit
  7. (ON THE GLO) in your ssh session, change the password of root
    [root@(none) ~]# passwd
    you will have to enter the new password two times. Remember it well, you will not be easily able to recover it without opening your device.
  8. (ON THE GLO) disable telnet login
    edit the file /etc/inetd.conf.local on the GLO (using vi) and remove the telnet line (the line starting with 23).
  9. restart your device

The combined KoboRoot.tgz is provided without warranty. If you need to reset your device, don’t blame me!

Sune Vuorela: KDE still makes Qt

18 October, 2017 - 03:21

A couple of years ago, I made a blog post, KDE makes Qt, with data about what percentage of Qt contributions came from people starting in KDE. Basically, how many Qt contributions are made by people who used KDE as a “gateway” drug into it.

I have now updated the graphs with data until the end of September 2017:

Many of these changes are made by people not directly as a result of their KDE work, but as a result of their paid work. But this doesn’t change the fact that KDE is an important project for attracting contributors to Qt, and a very good place to find experienced Qt developers.

Reproducible builds folks: Reproducible Builds: Weekly report #129

18 October, 2017 - 02:29

Here's what happened in the Reproducible Builds effort between Sunday October 8 and Saturday October 14 2017:

Upcoming events
  • On Saturday 21st October, Holger Levsen will present at All Systems Go! in Berlin, Germany on reproducible builds.

  • On Tuesday 24th October, Chris Lamb will present at All Things Open 2017 in Raleigh, NC, USA on reproducible builds.

  • On Wednesday 25th October, Holger Levsen will present at the Open Source Summit Europe in Prague, Czech Republic on reproducible builds.

  • From October 31st - November 2nd we will be holding the 3rd Reproducible Builds summit in Berlin. If you are working in the field of reproducible builds, you should totally be there. Please contact us if you have any questions! Quoting from the public invitation mail:

    These dates are inclusive, ie. the summit will be 3 full days from "9 to 5".
    Best arrive on Monday October 30th and leave on the evening of Thursday, 3rd
    at the earliest.
    
    
    Meeting content
    ===============
    
    The exact content of the meeting is going to be shaped by the
    participants, but here are the main goals:
    
     - Update & exchange about the status of reproducible builds in various
       projects.
     - Establish spaces for more strategic and long-term thinking than is possible
       in virtual channels.
     - Improve collaboration both between and inside projects.
     - Expand the scope and reach of reproducible builds to more projects.
     - Brainstorming / Designing several things, eg:
      - designing tools enabling end-users to get the most benefits from
        reproducible builds.
      - design of back-ends needed for that.
     - Work together and hack on solutions.
    
    There will be a huge variety of topics to be discussed. To give a few
    examples:
    - continuing design and development work on .buildinfo infrastructure
    - build-path issues everywhere
    - future directions for diffoscope, reprotest & strip-nondeterminism
    - reproducing signed artifacts such as RPMs
    - discussing formats and tools we can share
    - sharing proposals for standards and documentation helpful to spreading the
      reproducible effort
    - and many many more.
    
    Please think about what you want discuss, brainstorm & learn about at this
    meeting!
    
    
    Schedule
    ========
    
    Preliminary schedule for the three days:
    
    9:00 Welcome and breakfast
    9:30 Meeting starts
    12:30 Lunch
    17:00 End of the official schedule
    
    Gunner and Beatrice from Aspiration will help running the meeting. We will
    collect your input in subsequent emails to make the best of everyone's time.
    Feel free to start thinking about what you want to achieve there. We will also
    adjust topics as the meeting goes.
    
    Please note that we are very likely to spend large parts of the meeting away
    from laptops and closer to post-it notes. So make sure you've answered any
    critical emails *before* Tuesday morning! :)
    
Reproducible work in other projects

Pierre Pronchery reported that he has built the foundations for doing more reproducibility work in NetBSD.

Packages fixed

Upstream bugs and patches:

  • Bernhard M. Wiedemann:
    • qutim used RANDOM which is unpredictable and unreproducible.
    • dpdk used locale-dependent sort.

Reproducibility non-maintainer uploads in Debian:

QA fixes in Debian:

Reviews of unreproducible packages

6 package reviews have been added, 30 have been updated and 37 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (40)
  • Eric Valette (1)
  • Markus Koschany (1)

diffoscope development
  • Ximin Luo:
    • Containers: diff the metadata of containers in one central location in the code, so that deep-diff works between all combinations of different container types. This lets us finally close #797759.
    • Tests: add a complete set of cases to test all pairs of container types.
  • Chris Lamb:
    • Temporarily skip the test for ps2ascii(1) in ghostscript > 9.21 which now outputs text in a slightly different format.
    • UI wording improvements.

reprotest development

Version 0.7.3 was uploaded to unstable by Ximin Luo. It included contributions already covered by posts of the previous weeks, as well as new ones:

  • Ximin Luo:
    • Add a --env-build option for testing builds under different sets of environment variables. This is meant to help the discussion over at #876055 about how we should deal with different types of environment variables in a stricter definition of reproducibility.
    • UI and logging tweaks and improvements.
    • Simplify the _shell_ast module and merge it into shell_syn.

Misc.

This week's edition was written by Ximin Luo, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Jonathan Dowland: Electric Dreams

17 October, 2017 - 18:33

No spoilers, for those who have yet to watch it...

Channel 4 have been broadcasting a new 10-part series called Electric Dreams, based on some of the short fiction of Philip K Dick. The series was commissioned after Channel 4 lost Black Mirror to Netflix, perhaps to try and find something tonally similar. Electric Dreams is executive-produced by Bryan Cranston, who also stars in one of the episodes yet to be broadcast.

I've read all of PKD's short fiction [1] but it was a long time ago, so I have mostly forgotten the stories upon which the series is based. I've quite enjoyed going back and re-reading them after watching the corresponding episodes to see what changes they've made. In some cases the changes are subtle or complementary; in other cases they've whittled the original story right out and installed a new one inside the shell. A companion compilation has been published with just the relevant short stories in it, and from what I've seen browsing it in a book shop it also contains short introductions which might be worth a read.

Things started strong with The Hood Maker, which my wife also enjoyed, although she was disappointed to realise we wouldn't be revisiting those characters in the future. The world-building was strong enough that it seemed like a waste for a single episode.

My favourite episode of those broadcast so far was The Commuter, starring Timothy Spall. The changes made were complementary and immensely expanded the emotional range of the story. In some ways, a key aspect of the original story was completely inverted, which I found quite funny: my original take was that Dick implied a particular outcome was horrific, whereas it becomes desirable in the TV episode.

Episode 4, Crazy Diamond

One of the stories most hollowed-out was Sales Pitch which was the basis for Tony Grisoni’s episode Crazy Diamond, starring Steve Buscemi and Sidse Babett Knudsen. Buscemi was good but Knudsen totally stole every frame she was in. Fans of the cancelled Channel 4 show Utopia should enjoy this one: both were directed by Marc Munden and the directing, photography and colour balance really recall it.

The last episode broadcast was Real Life directed by Ronald D Moore of Battlestar Galactica reboot fame and starring Anna Paquin. Like Sales Pitch it bears very little resemblance to the original story. It played around with similar ideas explored in a lot of Sci-Fi movies and TV shows but left me a little flat; I didn't think it contributed much that I hadn't seen before. I was disappointed that there was a relatively conclusive ending. There was a subversive humour in the Dick short that was completely lost in the retelling. The world design seemed pretty generic.

I'm looking forward to Autofac, which is one of the shorts I can remember particularly enjoying.

  1. as collected in the 5 volumes of The Collected Stories of Philip K Dick, although I don't doubt there are some stragglers that were missed out when that series was compiled.

Russ Allbery: Bundle haul

17 October, 2017 - 12:38

Confession time: I started making these posts (eons ago) because a close friend did as well, and I enjoyed reading them. But the main reason why I continue is because the primary way I have to keep track of the books I've bought and avoid duplicates is, well, grep on these posts.

I should come up with a non-bullshit way of doing this, but time to do more elegant things is in short supply, and, well, it's my blog. So I'm boring all of you who read this in various places with my internal bookkeeping. I do try to at least add a bit of commentary.

This one will be more tedious than most since it includes five separate Humble Bundles, which increases the volume a lot. (I just realized I'd forgotten to record those purchases from the past several months.)

First, the individual books I bought directly:

Ilona Andrews — Sweep in Peace (sff)
Ilona Andrews — One Fell Sweep (sff)
Steven Brust — Vallista (sff)
Nicky Drayden — The Prey of Gods (sff)
Meg Elison — The Book of the Unnamed Midwife (sff)
Pat Green — Night Moves (nonfiction)
Ann Leckie — Provenance (sff)
Seanan McGuire — Once Broken Faith (sff)
Seanan McGuire — The Brightest Fell (sff)
K. Arsenault Rivera — The Tiger's Daughter (sff)
Matthew Walker — Why We Sleep (nonfiction)

Some new books by favorite authors, a few new releases I heard good things about, and two (Night Moves and Why We Sleep) from references in on-line articles that impressed me.

The books from security bundles (this is mostly work reading, assuming I'll get to any of it), including a blockchain bundle:

Wil Allsopp — Unauthorised Access (nonfiction)
Ross Anderson — Security Engineering (nonfiction)
Chris Anley, et al. — The Shellcoder's Handbook (nonfiction)
Conrad Barsky & Chris Wilmer — Bitcoin for the Befuddled (nonfiction)
Imran Bashir — Mastering Blockchain (nonfiction)
Richard Bejtlich — The Practice of Network Security Monitoring (nonfiction)
Kariappa Bheemaiah — The Blockchain Alternative (nonfiction)
Violet Blue — Smart Girl's Guide to Privacy (nonfiction)
Richard Caetano — Learning Bitcoin (nonfiction)
Nick Cano — Game Hacking (nonfiction)
Bruce Dang, et al. — Practical Reverse Engineering (nonfiction)
Chris Dannen — Introducing Ethereum and Solidity (nonfiction)
Daniel Drescher — Blockchain Basics (nonfiction)
Chris Eagle — The IDA Pro Book, 2nd Edition (nonfiction)
Nikolay Elenkov — Android Security Internals (nonfiction)
Jon Erickson — Hacking, 2nd Edition (nonfiction)
Pedro Franco — Understanding Bitcoin (nonfiction)
Christopher Hadnagy — Social Engineering (nonfiction)
Peter N.M. Hansteen — The Book of PF (nonfiction)
Brian Kelly — The Bitcoin Big Bang (nonfiction)
David Kennedy, et al. — Metasploit (nonfiction)
Manul Laphroaig (ed.) — PoC || GTFO (nonfiction)
Michael Hale Ligh, et al. — The Art of Memory Forensics (nonfiction)
Michael Hale Ligh, et al. — Malware Analyst's Cookbook (nonfiction)
Michael W. Lucas — Absolute OpenBSD, 2nd Edition (nonfiction)
Bruce Nikkel — Practical Forensic Imaging (nonfiction)
Sean-Philip Oriyano — CEHv9 (nonfiction)
Kevin D. Mitnick — The Art of Deception (nonfiction)
Narayan Prusty — Building Blockchain Projects (nonfiction)
Prypto — Bitcoin for Dummies (nonfiction)
Chris Sanders — Practical Packet Analysis, 3rd Edition (nonfiction)
Bruce Schneier — Applied Cryptography (nonfiction)
Adam Shostack — Threat Modeling (nonfiction)
Craig Smith — The Car Hacker's Handbook (nonfiction)
Dafydd Stuttard & Marcus Pinto — The Web Application Hacker's Handbook (nonfiction)
Albert Szmigielski — Bitcoin Essentials (nonfiction)
David Thiel — iOS Application Security (nonfiction)
Georgia Weidman — Penetration Testing (nonfiction)

Finally, the two SF bundles:

Buzz Aldrin & John Barnes — Encounter with Tiber (sff)
Poul Anderson — Orion Shall Rise (sff)
Greg Bear — The Forge of God (sff)
Octavia E. Butler — Dawn (sff)
William C. Dietz — Steelheart (sff)
J.L. Doty — A Choice of Treasons (sff)
Harlan Ellison — The City on the Edge of Forever (sff)
Toh Enjoe — Self-Reference ENGINE (sff)
David Feintuch — Midshipman's Hope (sff)
Alan Dean Foster — Icerigger (sff)
Alan Dean Foster — Mission to Moulokin (sff)
Alan Dean Foster — The Deluge Drivers (sff)
Taiyo Fujii — Orbital Cloud (sff)
Hideo Furukawa — Belka, Why Don't You Bark? (sff)
Haikasoru (ed.) — Saiensu Fikushon 2016 (sff anthology)
Joe Haldeman — All My Sins Remembered (sff)
Jyouji Hayashi — The Ouroboros Wave (sff)
Sergei Lukyanenko — The Genome (sff)
Chohei Kambayashi — Good Luck, Yukikaze (sff)
Chohei Kambayashi — Yukikaze (sff)
Sakyo Komatsu — Virus (sff)
Miyuki Miyabe — The Book of Heroes (sff)
Kazuki Sakuraba — Red Girls (sff)
Robert Silverberg — Across a Billion Years (sff)
Allen Steele — Orbital Decay (sff)
Bruce Sterling — Schismatrix Plus (sff)
Michael Swanwick — Vacuum Flowers (sff)
Yoshiki Tanaka — Legend of the Galactic Heroes, Volume 1: Dawn (sff)
Yoshiki Tanaka — Legend of the Galactic Heroes, Volume 2: Ambition (sff)
Yoshiki Tanaka — Legend of the Galactic Heroes, Volume 3: Endurance (sff)
Tow Ubukata — Mardock Scramble (sff)
Sayuri Ueda — The Cage of Zeus (sff)
Sean Williams & Shane Dix — Echoes of Earth (sff)
Hiroshi Yamamoto — MM9 (sff)
Timothy Zahn — Blackcollar (sff)

Phew. Okay, all caught up, and hopefully won't have to dump something like this again in the near future. Also, more books than I have any actual time to read, but what else is new.

Norbert Preining: Japanese TeX User Meeting 2017

17 October, 2017 - 12:22

Last Saturday the Japanese TeX User Meeting took place in Fujisawa, Kanagawa. Those who attended TUG 2013 in Tokyo will remember that the Japanese TeX community is quite big and vibrant. On Saturday about 50 users and developers gathered for a set of talks on a variety of topics.

The first talk was by Keiichiro Shikano (鹿野 桂一郎) on using Markup text to generate (La)TeX and HTML. He presented a variety of markup formats, including his own tool xml2tex.

The second talk was by Masamichi Hosoda (細田 真道) on reducing the size of PDF files using PDFmark extraction. As a contributor to many projects including Texinfo and LilyPond, Masamichi Hosoda told us horror stories about multiple font embedding in the manual of LilyPond, the permanent need for adaptation to newer Ghostscript versions, and the very recent development in Ghostscript prohibiting the merge of font definitions in PDF files.

Next up was Yusuke Terada (寺田 侑祐) on grading exams using TeX. Working through hundreds and hundreds of exams and doing the grading is something many of us are used to, and I think nobody really enjoys it. Yusuke Terada has combined various tools, including scans and pdf merging using pdfpages, to generate gradable PDFs which were then checked on an iPad. On the way he hit some limits in dvipdfmx on the number of images, but this was obviously only a small bump on the road. Now if that could be automated as a nice application, it would be a big hit I guess!

The fourth talk was by Satoshi Yamashita (山下 哲) on the preparation of slides using KETpic. KETpic is a long-running project by Setsuo Takato (高遠節夫) for the generation of graphics, in particular using Cinderella. KETpic and KETcindy integrate with lots of algebraic and statistical programs (R, Maxima, SciLab, …) and have a long history of development. Currently there are activities to incorporate it into TeX Live.

The fifth talk was by Takuto Asakura (朝倉 卓人) on programming TeX using expl3, the main building block of the LaTeX3 project, which has already been adopted by many TeX developers. Takuto Asakura came to fame at this year's TUG/BachoTeX 2017 when he won the W. J. Martin Prize for his presentation Implementing bioinformatics algorithms in TeX. I think we can expect great new developments from Takuto!

The last talk was by myself on fmtutil and updmap, two of the main management programs in any TeX installation, presenting the changes introduced over the last year, including the most recent release of TeX Live. Details have been posted on my blog, and a lengthy article in TUGboat 38:2, 2017 is available on this topic, too.

After the conference about half of the participants joined a social dinner in a nearby izakaya, followed by an after-dinner beer tasting at a local craft beer place. Thanks to Tatsuyoshi Hamada for the organization.

As usual, the Japanese TeX User Meetings are a great opportunity to discuss new features and make new friends. I am always grateful to be part of this very nice community! I am looking forward to next year's meeting.

François Marier: Checking Your Passwords Against the Have I Been Pwned List

17 October, 2017 - 12:10

Two months ago, Troy Hunt, the security professional behind Have I been pwned?, released an incredibly comprehensive password list in the hope that it would allow web developers to steer their users away from passwords that have been compromised in past breaches.

While the list released by HIBP is hashed, the plaintext passwords are out there and one should assume that password crackers have access to them. So if you use a password on that list, you can be fairly confident that it's very easy to guess or crack your password.

I wanted to check my active passwords against that list to check whether or not any of them are compromised and should be changed immediately. This meant that I needed to download the list and do these lookups locally since it's not a good idea to send your current passwords to this third-party service.
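
To illustrate what such a local lookup can look like, here is a small Python sketch. It assumes the downloaded list has been unpacked into a plain text file with one uppercase SHA-1 hash per line; scanning a multi-gigabyte file this way works but is slow, which is one reason to load the hashes into a proper database instead:

import getpass
import hashlib

def is_pwned(password, list_path="pwned-passwords.txt"):
    """Check a password against a local copy of the HIBP hash list."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    with open(list_path) as f:
        for line in f:
            # startswith() also tolerates lines carrying a trailing
            # ":count" suffix, should the list format include one.
            if line.strip().startswith(digest):
                return True
    return False

if __name__ == "__main__":
    pw = getpass.getpass("Password to check: ")
    print("Compromised!" if is_pwned(pw) else "Not in the list.")

The file name and exact list format here are assumptions; adjust them to match the dump you downloaded.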

I put my tool up on Launchpad / PyPI and you are more than welcome to give it a go. Install Postgres and Psycopg2 and then follow the README instructions to set up your database.

Gustavo Noronha Silva: Who knew we still had low-hanging fruits?

17 October, 2017 - 01:23

Earlier this month I had the pleasure of attending the Web Engines Hackfest, hosted by Igalia at their offices in A Coruña, and also sponsored by my employer, Collabora, Google and Mozilla. It has grown a lot and we had many new people this year.

Fun fact: I am one of the 3 or 4 people who have attended all of the editions of the hackfest since its inception in 2009, when it was called WebKitGTK+ hackfest \o/

It was a great get together where I met many friends and made some new ones. Had plenty of discussions, mainly with Antonio Gomes and Google’s Robert Kroeger, about the way forward for Chromium on Wayland.

We had the opportunity of explaining how we at Collabora cooperated with Igalians to implement and optimise a Wayland nested compositor for WebKit2 to share buffers between processes in an efficient way even on broken drivers. Most of the discussions and some of the work that led to this were done in previous hackfests, by the way!

The idea seems to have been mostly welcomed, the only concern being that Wayland’s interfaces would need to be tested for security (fuzzed). So we may end up going that same route with Chromium for allowing process separation between the UI and GPU (being renamed Viz, currently) processes.

On another note, and going back to the title of the post, at Collabora we have recently adopted Mattermost to replace our internal IRC server. Many Collaborans have decided to use Mattermost through an Epiphany Web Application or through a simple Python application that just shows a GTK+ window wrapping a WebKitGTK+ WebView.

Some people noticed that when the connection was lost Mattermost would take a very long time to notice and reconnect – its web sockets were taking a long, long time to time out, according to our colleague Andrew Shadura.

I did some quick searching on the codebase and noticed WebCore has a NetworkStateNotifier interface that it uses to get notified when connection changes. That was not implemented for WebKitGTK+, so it was likely what caused stuff to linger when a connection hiccup happened. Given we have GNetworkMonitor, implementation of the missing interfaces required only 3 lines of actual code (plus the necessary boilerplate)!
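
For readers curious what the GNetworkMonitor side looks like, here is a tiny standalone sketch using PyGObject rather than WebKit's C++ (purely illustrative; the actual WebKitGTK+ change wires the equivalent C API into WebCore's NetworkStateNotifier):

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib

monitor = Gio.NetworkMonitor.get_default()

def on_network_changed(mon, available):
    # WebCore's NetworkStateNotifier gets poked from a callback like
    # this, so e.g. web sockets notice a lost connection promptly.
    print("network available:", available)

monitor.connect("network-changed", on_network_changed)
print("initially available:", monitor.get_network_available())
GLib.MainLoop().run()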

I was surprised to still find such low-hanging fruit in WebKitGTK+, so I decided to look for more. It turns out WebCore also has a notifier for low-power situations, which was implemented only by the iOS port, and which causes the engine to throttle some timers and avoid some expensive checks it would do in normal situations. This required a few more lines to implement using upower-glib, but not that many either!

That was the fun I had during the hackfest in terms of coding. Mostly I had fun just lurking in break out sessions discussing the past, present and future of tech such as WebRTC, Servo, Rust, WebKit, Chromium, WebVR, and more. I also beat a few challengers in Street Fighter 2, as usual.

I’d like to say thanks to Collabora, Igalia, Google, and Mozilla for sponsoring and attending the hackfest. Thanks to Igalia for hosting and to Collabora for sponsoring my attendance along with two other collaborans. It was a great hackfest and I’m looking forward to the next one! See you in 2018 =)

Yves-Alexis Perez: OpenPGP smartcard transition (part 1.5)

16 October, 2017 - 22:32

Following the news about the ROCA vulnerability (weak key generation in Infineon-based smartcards, more info here and here) I can confirm that the Almex smartcards I mentioned in my last post (which are Infineon-based) are indeed vulnerable.

I've contacted Almex to have more details, but if you were interested in buying that smartcard, you might want to refrain for now.

It does *not* affect keys generated off-card and later injected (the process I use myself).


Iain R. Learmonth: No more no surprises

16 October, 2017 - 15:00

Debian has generally always had, as a rule, “sane defaults” and “no surprises”. This was completely shattered for me when Vim decided to hijack the mouse from my terminal and break all copy/paste functionality. This has occurred since the release of Debian 9.

I expect for my terminal to behave consistently, and this is broken every time I log in to a Debian 9 system where I have not configured Vim to disable this functionality. I also see I’m not alone in this frustration.

To fix this, in your .vimrc:

if !has("gui_running")
  set mouse=
endif

(This will check to see if you're using GVim or similar, where it would be reasonable to expect the mouse to work.)

This is perhaps not aggressive enough though. I never want to have console applications trying to use the mouse. I’ve configured rxvt to do things like open URLs in Firefox, etc. that I always want to work, and I always want my local clipboard to be used so I can copy/paste between remote machines.

I’ve found a small patch that would appear to disable mouse reporting for rxvt, but unfortunately I cannot do this through an Xresources option. If someone is looking for something to do for Hacktoberfest, I’d love to see this be an option for rxvt without re-compiling:

diff --git a/src/rxvt.h b/src/rxvt.h
index 5c7cf66..2751ba3 100644
--- a/src/rxvt.h
+++ b/src/rxvt.h
@@ -646,7 +646,7 @@ enum {
 #define PrivMode_ExtMouseRight  (1UL<<24) // xterm pseudo-utf-8, but works in non-utf-8-locales
 #define PrivMode_BlinkingCursor (1UL<<25)
 
-#define PrivMode_mouse_report   (PrivMode_MouseX10|PrivMode_MouseX11|PrivMode_MouseBtnEvent|PrivMode_MouseAnyEvent)
+#define PrivMode_mouse_report   0 /* (PrivMode_MouseX10|PrivMode_MouseX11|PrivMode_MouseBtnEvent|PrivMode_MouseAnyEvent) */
 
 #ifdef ALLOW_132_MODE
 # define PrivMode_Default (PrivMode_Autowrap|PrivMode_ShiftKeys|PrivMode_VisibleCursor|PrivMode_132OK)

Russ Allbery: Free software log (September 2017)

16 October, 2017 - 11:47

I said that I was going to start writing these regularly, so I'm going to stick to it, even when the results are rather underwhelming. One of the goals is to make the time for more free software work, and I do better at doing things that I record.

The only piece of free software work for September was that I made rra-c-util compile cleanly with the Clang static analyzer. This was fairly tedious work that mostly involved unconfusing the compiler or converting (semi-intentional) crashes into explicit asserts, but it unblocks using the Clang static analyzer as part of the automated test suite of my other projects that are downstream of rra-c-util.

One of the semantic changes I made was that the vector utilities in rra-c-util (which maintain a resizable array of strings) now always allocate room for at least one string pointer. This wastes a small amount of memory for empty vectors that are never used, but ensures that the strings struct member is always valid. This isn't, strictly speaking, a correctness fix, since all the checks were correct, but after some thought, I decided that humans might have the same problem that the static analyzer had. It's a lot easier to reason about a field that's never NULL. Similarly, the replacement function for a missing reallocarray now does an allocation of size 1 if given a size of 0, just to avoid edge case behavior. (I'm sure the behavior of a realloc with size 0 is defined somewhere in the C standard, but if I have to look it up, I'd rather not make a human reason about it.)

I started on, but didn't finish, making rra-c-util compile without Clang warnings (at least for a chosen set of warnings). By far the hardest problem here are the Clang warnings for comparisons between unsigned and signed integers. In theory, I like this warning, since it's the cause of a lot of very obscure bugs. In practice, gah does C ever do this all over the place, and it's incredibly painful to avoid. (One of the biggest offenders is write, which returns a ssize_t that you almost always want to compare against a size_t.) I did a bunch of mechanical work, but I now have a lot of bits of code like:

    if (status < 0)
        return;
    written = (size_t) status;
    if (written < avail)
        buffer->left += written;

which is ugly and unsatisfying. And I also have a ton of casts, such as with:

    buffer_resize(buffer, (size_t) st.st_size + used);

since st.st_size is an off_t, which may be signed. This is all deeply unsatisfying and ugly, and I think it makes the code moderately harder to read, but I do think the warning will potentially catch bugs and even security issues.

I'm still torn. Maybe I can find some nice macros or programming styles to avoid the worst of this problem. It definitely requires more thought, rather than just committing this huge mechanical change with lots of ugly code.

Mostly, this kind of nonsense makes me want to stop working on C code and go finish learning Rust....

Anyway, apart from work, the biggest thing I managed to do last month that was vaguely related to free software was upgrading my personal servers to stretch (finally). That mostly went okay; only a few things made it unnecessarily exciting.

The first was that one of my systems had a very tiny / partition that was too small to hold the downloaded debs for the upgrade, so I had to resize it (VM disk, partition, and file system), and that was a bit exciting because it has an old-style DOS partition table that isn't aligned (hmmm, which is probably why disk I/O is so slow on those VMs), so I had to use the obsolete fdisk -c=dos mode because I wasn't up for replacing the partition right then.

The second was that my first try at an upgrade died with a segfault during the libc6 postinst and then every executable segfaulted. A mild panic and a rescue disk later (and thirty minutes and a lot of swearing), I tracked the problem down to libc6-xen. Nothing in the dependency structure between jessie and stretch forces libc6-xen to be upgraded in lockstep or removed, but it's earlier in the search path. So ld.so gets upgraded, and then finds the old libc6 from the libc6-xen package, and the mismatch causes immediate segfaults. A chroot dpkg --purge from the rescue disk solved the problem as soon as I knew what was going on, but that was a stressful half-hour.

The third problem was something I should have known was going to be an issue: an old Perl program that does some internal stuff for one of the services I ran had a defined @array test that has been warning for eons and that I never fixed. That became a full syntax error with the most recent Perl, and then I fixed it incorrectly the first time and had a bunch of trouble tracking down what I'd broken. All sorted out now, and everything is happily running stretch. (ejabberd, which other folks had mentioned was a problem, went completely smoothly, although I suspect I now have too many of the plugin packages installed and should do a purging.)

Norbert Preining: Fixing vim in Debian

16 October, 2017 - 08:18

I was wondering for quite some time why vim on my server behaves so stupidly with respect to the mouse: the cursor jumping around, copy and paste not working the usual way. All this despite having

  set mouse=

in my /etc/vim/vimrc.local. Finally I found out why, thanks to bug #864074 and fixed it.

The whole mess comes from the fact that, when there is no ~/.vimrc, vim loads defaults.vim after vimrc.local and thus overwrites several settings put in there.

There is a comment (which I didn't see, though) in /etc/vim/vimrc explaining this:

" Vim will load $VIMRUNTIME/defaults.vim if the user does not have a vimrc.
" This happens after /etc/vim/vimrc(.local) are loaded, so it will override
" any settings in these files.
" If you don't want that to happen, uncomment the below line to prevent
" defaults.vim from being loaded.
" let g:skip_defaults_vim = 1

I agree that this is a good way to set up vim on a normal installation of Vim, but the Debian package could do better. The problem is laid out clearly in the bug report: if there is no ~/.vimrc, settings in /etc/vim/vimrc.local are overwritten.

This is as counterintuitive as it can be in Debian – and I don’t know any other package that does it in a similar way.

Since the settings in defaults.vim are quite reasonable, I want to have them, but only fix a few of the items I disagree with, like the mouse. In the end, what I did is the following in my /etc/vim/vimrc.local:

if filereadable("/usr/share/vim/vim80/defaults.vim")
  source /usr/share/vim/vim80/defaults.vim
endif
" now set the flag so that defaults.vim is not loaded again afterwards!
let g:skip_defaults_vim = 1

" turn off mouse
set mouse=
" other override settings go here

There is probably a better way to get a generic load statement that does not depend on the Vim version, but for now I am fine with that.

Iain R. Learmonth: Free Software Efforts (2017W41)

16 October, 2017 - 05:00

Here’s my weekly report for week 41 of 2017. In this week I have explored some Java 8 features, looked at automatic updates in a few Linux distributions and decided that actually I don’t need swap anymore.

Debian

The issue that was preventing the migration of the Tasktools Packaging Team’s mailing list from Alioth to Savannah has now been resolved.

Ana’s chkservice package that I sponsored last week has been ACCEPTED into unstable and since MIGRATED to testing.

Tor Project

I have produced a patch for the Tor Project website to update links to the Onionoo documentation now that this has moved (#23802). I’ve updated the Debian and Ubuntu relay configuration instructions to use systemctl instead of service where appropriate (#23048).

When a Tor relay is less than 2 years old, an alert will now appear on Atlas linking to the new relay lifecycle blog post (#23767). This should hopefully help new relay operators understand why their relay is not immediately fully loaded but instead takes some time to ramp up.

I have gone through the tickets for Tor Cloud and did not find any tickets that contain any important information that would be useful to someone reviving the project. I have closed out these tickets and the Tor Cloud component no longer has any non-closed tickets (#7763, #8544, #8768, #9064, #9751, #10282, #10637, #11153, #11502, #13391, #14035, #14036, #14073, #15821).

I’ve continued to work on turning the Atlas application into an integrated part of Tor Metrics (#23518) and you can see some progress here.

Finally, I’ve continued hacking on a Twitter bot to tweet factoids about the public Tor network and you can now enjoy some JavaDoc documentation if you’d like to learn a little about its internals. I am still waiting for a git repository to be created (#23799) but will be publishing the sources shortly after that ticket is actioned.

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remains £60.52. I do not believe that any hardware I rely on is looking at imminent failure.

I’d like to thank Digital Ocean for providing me with further credit for their platform to support my open source work.

I do not find it likely that I’ll be travelling to Cambridge for the miniDebConf, as the train alone would be around £350 and hotel accommodation a further £600 (to include both me and Ana).

Norbert Preining: TeX Live Manager: JSON output

15 October, 2017 - 08:32

With the development of TLCockpit continuing, I found the need for an easy exchange format between the TeX Live Manager tlmgr and frontend programs like TLCockpit. Thus, I have implemented JSON output for the tlmgr info command.

While the format is not 100% stable – I might change some things – I consider it pretty settled. The output of tlmgr info --data json is a JSON array with JSON objects for each package requested (the default is to list all).

[ TLPackageObj, TLPackageObj, ... ]

The structure of the JSON object TLPackageObj reflects the internal Perl hash. The keys name (String) and available (Boolean) are guaranteed to be present. In case the package is available, there are the following further keys, sorted by their type:

  • String type: name, shortdesc, longdesc, category, catalogue, containerchecksum, srccontainerchecksum, doccontainerchecksum
  • Number type: revision, runsize, docsize, srcsize, containersize, srccontainersize, doccontainersize
  • Boolean type: available, installed, relocated
  • Array type: runfiles (Strings), docfiles (Strings), srcfiles (Strings), executes (Strings), depends (Strings), postactions (Strings)
  • Object type:
    • binfiles: keys are architecture names, values are arrays of strings (list of binfiles)
    • binsize: keys are architecture names, values are numbers
    • docfiledata: keys are docfile names, values are objects with optional keys details and lang
    • cataloguedata: optional keys are topics, version, license, ctan, date; values are all strings

A rather long example showing the output for the package latex, formatted with json_pp and with the file lists and the long description shortened:

[
   {
      "installed" : true,
      "doccontainerchecksum" : "5bdfea6b85c431a0af2abc8f8df160b297ad73f6a324ca88df990f01f24611c9ae80d2f6d12c7b3767308fbe3de3fca3d11664b923ea4080fb13fd056a1d0c3d",
      "docfiles" : [
         "texmf-dist/doc/latex/base/README.txt",
         ....
         "texmf-dist/doc/latex/base/webcomp.pdf"
      ],
      "containersize" : 163892,
      "depends" : [
         "luatex",
         "pdftex",
         "latexconfig",
         "latex-fonts"
      ],
      "runsize" : 414,
      "relocated" : false,
      "doccontainersize" : 12812184,
      "srcsize" : 752,
      "revision" : 43813,
      "srcfiles" : [
         "texmf-dist/source/latex/base/alltt.dtx",
         ....
         "texmf-dist/source/latex/base/utf8ienc.dtx"
      ],
      "category" : "Package",
      "cataloguedata" : {
         "version" : "2017/01/01 PL1",
         "topics" : "format",
         "license" : "lppl1.3",
         "date" : "2017-01-25 23:33:57 +0100"
      },
      "srccontainerchecksum" : "1d145b567cf48d6ee71582a1f329fe5cf002d6259269a71d2e4a69e6e6bd65abeb92461d31d7137f3803503534282bc0c5546e5d2d1aa2604e896e607c53b041",
      "postactions" : [],
      "binsize" : {},
      "longdesc" : "LaTeX is a widely-used macro package for TeX, [...]",
      "srccontainersize" : 516036,
      "containerchecksum" : "af0ac85f89b7620eb7699c8bca6348f8913352c473af1056b7a90f28567d3f3e21d60be1f44e056107766b1dce8d87d367e7f8a82f777d565a2d4597feb24558",
      "executes" : [],
      "binfiles" : {},
      "name" : "latex",
      "catalogue" : null,
      "docsize" : 3799,
      "available" : true,
      "runfiles" : [
         "texmf-dist/makeindex/latex/gglo.ist",
         ...
         "texmf-dist/tex/latex/base/x2enc.dfu"
      ],
      "shortdesc" : "A TeX macro package that defines LaTeX"
   }
]

What is currently not available via tlmgr info and thus also not via the JSON output is access to virtual TeX Live databases with several member databases (multiple repositories). I am thinking about how to incorporate this information.

These changes are currently available in the tlcritical repository, but will enter proper TeX Live repositories soon.

Using this JSON output I will rewrite the current TLCockpit tlmgr interface to display more complete information.
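
To illustrate how a frontend might consume this output, here is a minimal Scala sketch. It is only a sketch under some assumptions: tlmgr is assumed to be on the PATH, and the ujson library is used purely as an example parser; TLCockpit itself may well do this differently.

import scala.sys.process._

// Minimal sketch: run tlmgr and walk the JSON package list.
// ujson (com.lihaoyi) is an assumption here; any JSON parser would do.
object TlmgrInfoJson {
  def main(args: Array[String]): Unit = {
    // Capture the JSON array that tlmgr prints on stdout.
    val out = Seq("tlmgr", "info", "--data", "json").!!
    // The top level is an array of TLPackageObj objects.
    for (pkg <- ujson.read(out).arr) {
      // name and available are guaranteed to be present.
      val name = pkg("name").str
      val available = pkg("available").bool
      // Further keys such as revision exist only when available is true.
      val revision =
        if (available) pkg.obj.get("revision").map(_.num.toLong) else None
      println(s"$name available=$available revision=${revision.getOrElse("-")}")
    }
  }
}

Since the values are plain JSON strings, numbers, booleans, arrays, and objects, a frontend does not need any knowledge of the underlying Perl hash.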

Lior Kaplan: Debian Installer git repository

15 October, 2017 - 05:15

While dealing with d-i’s translation last month at FOSScamp, I was kinda surprised it’s still on SVN. When reviewing PO files from others, I couldn’t select specific parts to commit.

Debian does have a git server, and many DDs (Debian Developers) use it for their Debian work, but it’s not as public as I wish it to be, meaning I lack pull/merge request abilities as well as a review process.

Recently I got a reminder that the D-I’s Hebrew translation needs some love, and I asked my local community for help. Receiving a PO file by mail reminded me of the SVN annoyance, so this time I decided to convert the repository to git and ask people to send me pull requests. Another benefit would be making the process more transparent, as others could see these PRs (and hopefully comment if needed).

For this experiment, I opened a repository on GitHub at https://github.com/kaplanlior/debian-installer. I know GitHub isn’t open source like GitLab is, but it is a popular choice, which is a good start for my experiment. If and when it succeeds, we can discuss the platform.

Filed under: Debian GNU/Linux

Petter Reinholdtsen: A one-way wall on the border?

15 October, 2017 - 03:10

I find it fascinating how many of the people being locked inside the proposed border wall between the USA and Mexico support the idea. The proposal to keep Mexicans out reminds me of the propaganda twist of the East German government, which called the Berlin Wall the “Antifascist Bulwark” after erecting it, claiming the wall was erected to keep enemies from creeping into East Germany, while it was obvious to the people locked inside that it was erected to keep them from escaping.

Do the people in the USA supporting this wall really believe it is a one-way wall, only keeping people on the outside from getting in, while not keeping people on the inside from getting out?

Norbert Preining: ScalaFX: ListView with CellFactory

14 October, 2017 - 12:29

I had a bit of a hard time getting ScalaFX to display a list of items in a scrollable space where each item can be clicked. I use this in TLCockpit to display the list of documentation files in a TeX Live package, and to open them directly from the application. Unfortunately there is not a huge amount of examples using ScalaFX out there on the web, so it took me a bit. My first try was using a VBox with various Labels in it, but that is not scrollable.

In other areas I have used TreeTableView, so in this case using ListView should be fine. What I finally came up with is the following code:

import scalafx.application.JFXApp
import scalafx.application.JFXApp.PrimaryStage
import scalafx.collections.ObservableBuffer
import scalafx.geometry.Orientation
import scalafx.scene.control.{ListCell, ListView}
import scalafx.scene.input.MouseEvent
import scalafx.scene.{Cursor, Scene}
import scalafx.scene.paint.Color
import scalafx.Includes._

object ApplicationMain extends JFXApp {

  val SomeStrings: Seq[String] = Seq("Hello", "World", "Enjoy")

  stage = new PrimaryStage {
    title = "ListViewExample"
    scene = new Scene {
      root = {
        new ListView[String] {
          orientation = Orientation.Vertical
          cellFactory = {
            p => {
              val cell = new ListCell[String]
              cell.textFill = Color.Blue
              cell.cursor = Cursor.Hand
              cell.item.onChange { (_, _, str) => cell.text = str }
              cell.onMouseClicked = { me: MouseEvent => println("Do something with " + cell.text.value) }
              cell
            }
          }
          items = ObservableBuffer(SomeStrings)
        }
      }
    }
  }
}

Some comments on the code, at least as far as I understand it:

  • Importing scalafx.Includes._ seems to simplify some things; in particular, the event handler routines can be written in a more straightforward way.
  • On the ListView itself, many more properties can be set, for example the preferred height and the max height, both of which I am using.
  • In my case a plain ListCell was enough for my needs (changing color, cursor, and allowing for mouse clicks), but if one needs something more complicated here, it is best to create an arbitrary object and assign it to the graphic field (see the sketch after this list).
  • The essential part that actually fills the cells is the cell.item.onChange handler, which takes three arguments, the last of which is the new value; it is used to update the cell text.
  • Last but not least, one needs to assign some Observable to items; in this case I use an ObservableBuffer around the list of strings.
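
For the more complicated graphic case mentioned above, here is a hedged sketch (my own illustration, not code from TLCockpit) of a cellFactory that builds a small node tree and assigns it to the graphic field instead of setting the text. It would replace the cellFactory in the example above and additionally requires importing scalafx.scene.control.Label and scalafx.scene.layout.HBox:

cellFactory = {
  p => {
    val cell = new ListCell[String]
    cell.cursor = Cursor.Hand
    cell.item.onChange { (_, _, str) =>
      // Rebuild the graphic whenever the item changes; str is null
      // for empty cells, so clear the graphic in that case.
      if (str != null) {
        val label = new Label(str)
        label.textFill = Color.Blue
        val box = new HBox
        box.children = Seq(label)
        cell.graphic = box
      } else {
        cell.graphic = null
      }
    }
    cell
  }
}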

Once managed, it doesn’t look so complicated, but it took me some time.
