Planet Debian

Planet Debian - https://planet.debian.org/

Russ Allbery: California state election

20 May, 2018 - 12:07

Hm, I haven't done one of these in a while. Well, time to alienate future employers and make awkward mistakes in public that I have to explain if I ever run for office! (Spoiler: I'm highly unlikely to ever run for office.)

This is only of direct interest to California residents. To everyone else, RIP your feed reader, and I'm sorry for the length. (My hand-rolled blog software doesn't do cut tags.) I'll spare you all the drill-down into the Bay Area regional offices. (Apparently we elect our coroner, which makes no sense to me.)

Propositions

I'm not explaining these because this is already much too long; those who aren't in California and want to follow along can see the voter guide.

Proposition 68: YES. Still a good time to borrow money, and what we're borrowing money for here seems pretty reasonable. State finances are in reasonable shape; we have the largest debt of any state for the obvious reason that we have the most people and the most money.

Proposition 69: YES. My instinct is to vote no because I have a general objection to putting restrictions on how the state manages its budget. I don't like dividing tax money into locked pools for the same reason that I stopped partitioning hard drives. That said, this includes public transit in the spending pool from gasoline taxes (good), the opposition is incoherent, and there are wide-ranging endorsements. That pushed me to yes on the grounds that maybe all these people understand something about budget allocations that I don't.

Proposition 70: NO. This is some sort of compromise with Republicans because they don't like what cap-and-trade money is being spent on (like high-speed rail) and want a say. If I wanted them to have a say, I'd vote for them. There's a reason why they have to resort to backroom tricks to try to get leverage over laws in this state, and it's not because they have good ideas.

Proposition 71: YES. Entirely reasonable change to say that propositions only go into effect after the election results are final. (There was a real proposition where this almost caused a ton of confusion, and prompted this amendment.)

Proposition 72: YES. I'm grumbling about this because I think we should get rid of all this special-case bullshit in property taxes and just readjust them regularly. Unfortunately, in our current property tax regime, you have to add more exemptions like this because otherwise the property tax hit (that would otherwise not be incurred) is so large that it kills the market for these improvements. Rainwater capture is to the public benefit in multiple ways, so I'll hold my nose and vote for another special exception.

Federal Offices

US Senator: Kevin de León. I'll vote for Feinstein in the general, and she's way up on de León in the polls, but there's no risk in voting for the more progressive candidate here since there's no chance Feinstein won't get the most votes in the primary. De León is a more solidly progressive candidate than Feinstein. I'd love to see a general election between the two of them.

State Offices

I'm omitting all the unopposed ones, and all the ones where there's only one Democrat running in the primary. (I'm not going to vote for any Republican, with one exception noted below, and third parties in the US are unbelievably dysfunctional and not ready to govern.) For those outside the state, California has a jungle primary where the top two vote-getters regardless of party go to the general election, so this is more partisan and more important than other state primaries.

Governor: Delaine Eastin. One always has to ask, in our bullshit voting system, whether one has to vote tactically instead of for the best candidate. But, looking at polling, I think there's no chance Gavin Newsom (the second-best candidate and the front-runner) won't advance to the general election, so I get to vote for the candidate I actually want to win, even though she's probably not going to. Eastin is by far the most progressive candidate running who actually has the experience required to be governor. (Spoiler: Newsom is going to win, and I'll definitely vote for him in the general against Villaraigosa.)

Lieutenant Governor: Eleni Kounalakis. She and Bleich are the strongest candidates. I don't see a ton of separation between them, but Kounalakis's endorsements are a bit stronger for me. She's also the one candidate who has a specific statement about what she plans to do with the lieutenant governor role of oversight over the university system, which is almost its only actual power. (This political office is stupid and we should abolish it.)

Secretary of State: Alex Padilla. I agree more with Ruben Major's platform (100% paper ballots is the correct security position), but he's an oddball outsider and I don't think he can accomplish as much. Padilla has an excellent track record as the incumbent and is doing basically the right things, just less dramatically.

Treasurer: Fiona Ma. I like Vivek Viswanathan and support his platform, but Fiona Ma has a lot more political expertise and I think will be more effective. I look forward to voting for Viswanathan for something else someday.

Attorney General: Dave Jones. Xavier Becerra hasn't been doing a bad job fighting off bad federal policy, but that seems to be all that he's interested in, and he's playing partisan games with the office. He has an air of amateurishness and political hackery. Dave Jones holds the same positions in a more effective way, is more professional, and has done a good job as Insurance Commissioner.

Insurance Commissioner: Steve Poizner. I'm going to vote for the (former) Republican here. Poizner expressed some really bullshit views on immigration when he ran for governor (which he's apologized for). I wouldn't support him for a more political office. But he was an excellent insurance commissioner (see, for instance, the response to Blue Cross's rate increase request). I'm closer to Ricardo Lara politically, but in his statements to the press he comes across as a clown: self-driving car insurance problems, cannabis insurance, climate change insurance, and a bunch of other nonsense that makes me think he doesn't understand the job. The other democrat, Mahmood, seems like less of a partisan hack, but he's a virtual unknown. If this were an important partisan office, I'd hold my nose and vote for Lara, but the job of insurance commissioner is more to be an auditor and negotiator, and Poizner was really good at it.

Superintendent of Public Instruction: Tony Thurmond. The other front-runner is Marshall Tuck, who is a charter school advocate. I hate charter schools with the passion of a burning sun.

Local Measures

Regional Measure 3: YES. Even more hyper-local than the rest of this post, but mentioning it because it was a narrow call. Bridge tolls are regressive, and I'm not a big fan of raising them as opposed to, say, increasing property taxes (yes please) or income taxes. That said, taxing cars to pay for (largely) public transit is the direction the money should flow. It was finely balanced for me, but the thrust of the projects won out over the distaste for the regressive tax.

Russ Allbery: Free software log (April 2018)

20 May, 2018 - 06:15

This is rather late since I got distracted by various other things including, ironically, releasing a bunch of software. This is for April, so it doesn't include the releases from this month.

The main release I worked on was remctl 3.14, which fixed a security bug introduced in 3.12 with the sudo configuration option. This has since been replaced by 3.15, which has more thorough maintainer testing infrastructure to hopefully prevent this from happening again.

I also did the final steps of the release process for INN 2.6.2, although as usual Julien ÉLIE did all of the hard work.

On the Debian side, I uploaded a new rssh package for the migration to GitLab (salsa.debian.org). I have more work to do on that front, but haven't yet had the time. I've been prioritizing some of my own packages over doing more general Debian work.

Finally, I looked at my Perl modules on CPANTS (the CPAN testing service) and made note of a few things I need to fix, plus filed a couple of bugs for display issues (one of which turned out to be my fault and has been fixed in Git). I also did a bit of research on the badges that people in the Rust community use in their documentation and started adding support to DocKnot, some of which made it into the subsequent release I did this month.

Dirk Eddelbuettel: RcppGSL 0.3.5

20 May, 2018 - 03:11

A maintenance update of RcppGSL just brought version 0.3.5 to CRAN, a mere twelve days after the RcppGSL 0.3.4 release. Just like yesterday's upload of inline 0.3.15 it was prompted by a CRAN request to update the per-package manual page; see the inline post for details.

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

No user-facing new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.5 (2018-05-19)
  • Update package manual page using references to DESCRIPTION file [CRAN request].

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Martín Ferrari: MiniDebConf Hamburg - Friday/Saturday

20 May, 2018 - 00:40
MiniDebCamp Hamburg - Friday 18/5, Saturday 19/5

Friday and Saturday have been very productive days, I love events where there is time to hack!

I had more chats about contributors.d.o with Ganneff and Formorer, and if all goes according to plan, soon salsa will start streaming commit information to contributors and populate information about different teams: not only about normal packaging repos, but also about websites, tools, native packages, etc.

Note that the latter require special configuration, and the same goes if you want to have separate stats for your team (like for the Go team or the Perl team). So if you want to offer proper attribution to members of your team, please get in touch!

I spent loads of time working on Prometheus packages, and finally today (after almost a year) I uploaded a new version of prometheus-alertmanager to experimental. I decided to just drop the web interface entirely, as packaging the whole Elm framework would take me months of work. If anybody feels like writing a basic HTML/JS interface, I would be happy to include it in the package!

While doing that, I found bugs in the CI pipeline for Go packages in Salsa. Solving these will hopefully make the automatic testing more reliable, as API breakage is sadly a big problem in the Go ecosystem.

I am loving the venue here. Apart from hosting some companies and associations, there is an art gallery which currently has a photo exhibition called Echo park; there were parties happening last night, and tonight apparently there will be more. This place is amazing!


Thorsten Glaser: Progress report from the Movim packaging sprint at MiniDebconf

19 May, 2018 - 23:45

After arriving, I’ve started collecting knowledge first. I reviewed upstream’s composer.json file and Wiki page about dependencies and, after it quickly became apparent that we need much more information (e.g. which versions are in sid, what the package names are, and, most importantly, recursive dependencies), a Wiki page of our own grew. Then I made a hunt for information about how to package stuff that uses PHP Composer upstream, and found the, ahem, wonderfully abundant, structured, plentiful and clear documentation from the Debian PHP/PEAR Packaging team. (Some time and reverse-engineering later I figured out that we just ignore composer and read its control file in pkg-php-tools converting dependency information to Debian package relationships. Much time later I also figured out it mangles package names in a specific way and had to rename one of the packages I created in the meantime… thankfully before having uploaded it.) Quickly, the Wiki page grew listing the package names we’re supposed to use. I created a package which I could use as template for all others later.

The upstream Movim developer arrived as well — we have quite an amount of upstream developers of various projects attending MiniDebConf, to the joy of the attendees actually directly involved in Debian, and this makes things much easier, as he immediately started removing dependencies (to make our job easier) and fixing bugs and helping us understand how some of those dependencies work. (I also contributed code upstream that replaces some Unicode codepoints or sequences thereof, such as 3⃣ or ‼ or 👱🏻‍♀️, with <img…/> tags pointing to the SVG images shipped with Movim, with a description (generated from their Unicode names) in the alt attribute.)

Now, Saturday, all dependencies are packaged so far, although we’re still waiting for maintainer feedback for those two we’d need to NMU (or have them upload or us take the packages over); most are in NEW of course, but that’s no problem. Now we can tackle packaging Movim itself — I guess we’ll see whether those other packages actually work then ☺ We also had a chance to fix bugs in other packages, like guacamole and musescore.

In the meantime we’ve also had the chance to socialise, discuss, meet, etc. other Debian Developers and associates and enjoy the wonderful food and superb coffee of the “Cantina” at the venue; let me hereby express heartfelt thanks to the MiniDebConf organisation for this good location pick!

Mike Hommey: Announcing git-cinnabar 0.5.0 beta 3

19 May, 2018 - 12:26

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows cloning, pulling and pushing from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 2?
  • Fixed incompatibilities with Mercurial >= 4.4.
  • Miscellaneous metadata format changes.
  • Move more operations to the helper, hopefully making things faster.
  • Updated git to 2.17.0 for the helper.
  • Properly handle clones with bundles when the repository doesn’t contain anything newer than the bundle.
  • Fixed tag cache, which could lead to missing tags.

Dirk Eddelbuettel: inline 0.3.15

19 May, 2018 - 08:04

A maintenance release of the inline package arrived on CRAN today. inline facilitates writing code in-line in simple string expressions or short files. The package is mature and in maintenance mode: Rcpp used it greatly for several years but then moved on to Rcpp Attributes, so we now have a much more limited need for extensions to inline. But a number of other packages have a hard dependence on it, so we do of course look after it as part of the open source social contract (which is a name I just made up, but you get the idea...).

This release was triggered by a (as usual very reasonable) CRAN request to update the per-package manual page, which had become stale. We now use Rd macros; you can see the diff for just that file at GitHub, and I also include it below. My pkgKitten package-creation helper uses the same scheme; I wholeheartedly recommend it -- as the diff shows, it makes things a lot simpler.

Some other changes reflect two user-contributed pull requests, as well as standard minor package update issues. See below for a detailed list of changes extracted from the NEWS file.

Changes in inline version 0.3.15 (2018-05-18)
  • Correct requireNamespace() call (thanks to Alexander Grueneberg in #5).

  • Small simplification to .travis.yml; also switch to https.

  • Use seq_along instead of seq(along=...) (Watal M. Iwasaki in #6).

  • Update package manual page using references to DESCRIPTION file [CRAN request].

  • Minor packaging updates.

Courtesy of CRANberries, there is a comparison to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Joey Hess: fridge 0.1

19 May, 2018 - 06:26

Imagine something really cool, like a fridge connected to a powerwall, powered entirely by solar panels. What could be cooler than that?

How about a fridge powered entirely by solar panels without the powerwall? Zero battery use, and yet it still preserves your food.

That's much cooler, because batteries, even hyped ones like the powerwall, are expensive and inefficient and have limited cycles. Solar panels are cheap and efficient now. With enough solar panels that the fridge has power to cool down most days (even cloudy days), and a smart enough control system, the fridge itself becomes the battery -- a cold battery.

I'm live coding my fridge, with that goal in mind. You can follow along in this design thread on secure scuttlebutt, and my git commits, and you can watch real-time data from my fridge.

Over the past two days, which were not especially sunny, my 1 kilowatt of solar panels has managed to cool the fridge down close to standard fridge temperatures. The temperature remains steady overnight thanks to added thermal mass in the fridge. My food seems safe in it, despite it being powered off for 14 hours each night.

(Numbers in this graph are running higher than the actual temps of food in the fridge, for reasons explained in the scuttlebutt thread.)

Of course, the long-term viability of a fridge that never draws from a battery is TBD; I'll know within a year if it works for me.

I've written about the coding side of this project before, in my haskell controlled offgrid fridge. The reactive-banana-automation library is working well in this application. My AIMS inverter control board and easy-peasy-devicetree-squeezy were other groundwork for this project.

Andrej Shadura: Goodbye Octopress, hello Pelican

19 May, 2018 - 03:05

Hi from MiniDebConf in Hamburg!

As you may have noticed, I don’t update this blog often. One of the reasons is that until now it was incredibly difficult to write posts. The software I used, Octopress (based on Jekyll), was written in Ruby, and it required quite specific versions of its dependencies. I had the workspace deployed on one of my old laptops, but when I attempted to reproduce it on the laptop I currently use, I failed. Some dependencies could not be installed, others failed, and my Ruby skills weren’t enough to fix that mess. (I have to admit my Ruby skills have improved, if only insignificantly, since I installed Octopress, but that wasn’t enough to help in this case.)

I’ve spent some time during this DebCamp migrating to Pelican, which is written in Python, packaged in Debian, and whose dependencies are quite straightforward to install. I had to install (and write) a few plugins to make the migration easier, and port my custom Octopress Bootstrap theme to Pelican.

I no longer include any scripts from Twitter or Facebook (I made Tweet and Share button static links), and the Disqus comments are loaded only on demand, so reading this blog will respect your privacy better than before.

See you at MiniDebConf tomorrow!

Joachim Breitner: Proof reuse in Coq using existential variables

18 May, 2018 - 19:51

This is another technical post that is of interest only to Coq users.

TL;DR: Using existential variables for hypotheses allows you to easily refactor a complicated proof into an induction schema and the actual proofs.

Setup

As a running example, I will use a small theory of “bags”, which you can think of as lists represented as trees, to allow an O(1) append operation:

Require Import Coq.Arith.Arith.
Require Import Psatz.
Require FunInd.

(* The data type *)
Inductive Bag a : Type :=
  | Empty : Bag a
  | Unit  : a -> Bag a
  | Two   : Bag a -> Bag a -> Bag a.

Arguments Empty {_}.
Arguments Unit {_}.
Arguments Two {_}.

Fixpoint length {a} (b : Bag a) : nat :=
  match b with
  | Empty     => 0
  | Unit _    => 1
  | Two b1 b2 => length b1 + length b2
  end.

(* A smart constructor that ensures that a [Two] never
   has [Empty] as subtrees. *)
Definition two {a} (b1 b2 : Bag a) : Bag a := match b1 with
  | Empty => b2
  | _ => match b2 with | Empty => b1
                       | _ => Two b1 b2 end end.

Lemma length_two {a} (b1 b2 : Bag a) :
  length (two b1 b2) = length b1 + length b2.
Proof. destruct b1, b2; simpl; lia. Qed.

(* A first non-trivial function *)
Function take {a : Type} (n : nat) (b : Bag a) : Bag a :=
  if n =? 0
  then Empty
  else match b with
       | Empty     => b
       | Unit x    => b
       | Two b1 b2 => two (take n b1) (take (n - length b1) b2)
       end.

The theorem

The theorem that I will be looking at in this proof describes how length and take interact:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).

Before I dive into it, let me point out that this example itself is too simple to warrant the techniques that I will present in this post. I have to rely on your imagination to scale this up to appreciate the effect on significantly bigger proofs.

Naive induction

How would we go about proving this lemma? Surely, induction is the way to go! And indeed, this is provable using induction (on the Bag) just fine:

Proof.
  intros.
  revert n.
  induction b; intros n.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + rewrite Nat.min_0_r. reflexivity.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. lia.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

But there is a problem: A proof by induction on the Bag argument immediately creates three subgoals, one for each constructor. But that is not how take is defined, which first checks the value of n, independent of the constructor. This means that we have to do the case-split and the proof for the case n = 0 three times, although they are identical. It’s a one-line proof here, but imagine something bigger...

Proof by fixpoint

Can we refactor the proof to handle the case n = 0 first? Yes, but not with a simple invocation of the induction tactic. We could do well-founded induction on the length of the argument, or we can do the proof using the more primitive fix tactic. The latter is a bit hairy: you won’t know if your proof is accepted until you do Qed (or check with Guarded), but when it works it can yield some nice proofs.

Proof.
  intros a.
  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n. rewrite Nat.min_0_l. reflexivity.
  + destruct b.
    * rewrite Nat.min_0_r. reflexivity.
    * simpl. lia.
    * simpl. rewrite length_two, !IH. lia.
Qed.

Nice: we eliminated the duplication of proofs!

A functional induction lemma

Again, imagine that we jumped through more hoops here ... maybe some well-founded recursion with a tricky size measure and complex proofs that the measure decreases ... or maybe you need to carry around an invariant about your arguments and you have to work hard to satisfy the assumption of the induction hypothesis.

As long as you do only one proof about take, that is fine. As soon as you do a second proof, you will notice that you have to repeat all of that, and it can easily make up most of your proof...

Wouldn’t it be nice if you could do the common parts of the proofs only once, obtain a generic proof scheme that you can use for (most) proofs about take, and then just fill in the blanks?

Incidentally, the Function command provides precisely that:

take_ind
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall (n : nat) (b : Bag a), (n =? 0) = true -> P n b Empty) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> b = Empty -> P n Empty b) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> forall x : a, b = Unit x -> P n (Unit x) b) ->
       (forall (n : nat) (b : Bag a),
        (n =? 0) = false ->
        forall b1 b2 : Bag a,
        b = Two b1 b2 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

which is great if you can use Function (although not perfect -- we’d rather see n = 0 instead of (n =? 0) = true), but often Function is not powerful enough to define the function you care about.

Extracting the scheme from a proof

We could define our own take_ind' by hand, but that is a lot of work, and we may not get it right easily, and when we change our functions, there is now this big proof statement to update.

Instead, let us use existentials, which are variables where Coq infers their type from how we use them, so we don’t have to declare them. Unfortunately, Coq does not support writing just

Lemma take_ind':
  forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
  forall (IH1 : ?) (IH2 : ?) (IH3 : ?) (IH4 : ?),
  forall n b, P n b (take n b).

where we just leave out the type of the assumptions (Isabelle does...), but we can fake it using some generic technique.

We begin by stating an auxiliary lemma using a sigma type to say "there exist some assumptions that are sufficient to show the conclusion":

Lemma take_ind_aux:
  forall a (P : _ -> _ -> _ -> Prop),
  { Hs : Prop |
    Hs -> forall n (b : Bag a), P n b (take n b)
  }.

We use the eexists tactic (existential exists; see https://coq.inria.fr/refman/proof-engine/tactics.html#coq:tacv.eexists) to construct the sigma type without committing to the type of Hs yet.

Proof.
  intros a P.
  eexists.
  intros Hs.

This gives us an assumption Hs : ?Hs -- note the existential type. We need four of those, which we can achieve by writing

  pose proof Hs as H1. eapply proj1 in H1. eapply proj2 in Hs.
  pose proof Hs as H2. eapply proj1 in H2. eapply proj2 in Hs.
  pose proof Hs as H3. eapply proj1 in H3. eapply proj2 in Hs.
  rename Hs into H4.

we now have this goal state:

1 subgoal
a : Type
P : nat -> Bag a -> Bag a -> Prop
H4 : ?Goal2
H1 : ?Goal
H2 : ?Goal0
H3 : ?Goal1
______________________________________(1/1)
forall (n : nat) (b : Bag a), P n b (take n b)

At this point, we start reproducing the proof of length_take: The same approach to induction, the same case splits:

  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n.
    revert b.
    refine H1.
  + rename n0 into Hnot_null.
    destruct b.
    * revert n Hnot_null.
      refine H2.
    * rename a0 into x.
      revert x n Hnot_null.
      refine H3.
    * assert (IHb1 : P n b1 (take n b1)) by apply IH.
      assert (IHb2 : P (n - length b1) b2 (take (n - length b1) b2)) by apply IH.
      revert n b1 b2 Hnot_null IHb1 IHb2.
      refine H4.
Defined. (* Important *)

Inside each case, we move all relevant hypotheses into the goal using revert and refine with the corresponding assumption, thus instantiating it. In the recursive case (Two), we assert that P holds for the subterms, by induction.

It is important to end this proof with Defined, and not Qed, as we will see later.

In a next step, we can remove the sigma type:

Definition take_ind' a P := proj2_sig (take_ind_aux a P).

The type of take_ind' is as follows:

take_ind'
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       proj1_sig (take_ind_aux a P) ->
       forall n b, P n b (take n b)

This looks almost like an induction lemma. The assumptions of this lemma have the not very helpful type proj1_sig (take_ind_aux a P), but we can already use this to prove length_take:

Theorem length_take:
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind' with (P := fun n b r => length r = min n (length b)).
  repeat apply conj; intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

In this case I have to explicitly state P where I invoke take_ind', because Coq cannot figure out this instantiation on its own (it requires higher-order unification, which is undecidable and unpredictable). In other cases I had more luck.

After I apply take_ind', I have this proof goal:

______________________________________(1/1)
proj1_sig (take_ind_aux a (fun n b r => length r = min n (length b)))

which is the type that Coq inferred for Hs above. We know that this is a conjunction of a bunch of assumptions, and we can split it as such, using repeat apply conj. At this point, Coq needs to look inside take_ind_aux; this would fail if we used Qed to conclude the proof of take_ind_aux.

This gives me four goals, one for each case of take, and the remaining proof really only deals with the specifics of length_take -- no more worrying about getting the induction right or doing the case-splitting the right way.

Also note that, very conveniently, Coq uses the same name for the induction hypotheses IHb1 and IHb2 that we used in take_ind_aux!

Making it prettier

It may be a bit confusing to have this proj1_sig in the type, especially when working in a team where others will use your induction lemma without knowing its internals. But we can resolve that, and also turn the conjunctions into normal arrows, using a bit of tactic support. This is completely generic, so if you follow this procedure, you can just copy most of that:

Lemma uncurry_and: forall {A B C}, (A /\ B -> C) -> (A -> B -> C).
Proof. intros. intuition. Qed.
Lemma under_imp:   forall {A B C}, (B -> C) -> (A -> B) -> (A -> C).
Proof. intros. intuition. Qed.
Ltac iterate n f x := lazymatch n with
  | 0 => x
  | S ?n => iterate n f uconstr:(f x)
end.
Ltac uncurryN n x :=
  let n' := eval compute in n in
  lazymatch n' with
  | 0 => x
  | S ?n => let uc := iterate n uconstr:(under_imp) uconstr:(uncurry_and) in
            let x' := uncurryN n x in
            uconstr:(uc x')
end.

With this in place, we can define our final proof scheme lemma:

Definition take_ind'' a P
  := ltac:(let x := uncurryN 3 (proj2_sig (take_ind_aux a P)) in exact x).
Opaque take_ind''.

The type of take_ind'' is now exactly what we’d wish for: All assumptions spelled out, and the n =? 0 already taken care of (compare this to the take_ind provided by the Function command above):

take_ind''
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall b : Bag a, P 0 b Empty) ->
       (forall n : nat, n <> 0 -> P n Empty Empty) ->
       (forall (x : a) (n : nat), n <> 0 -> P n (Unit x) (Unit x)) ->
       (forall (n : nat) (b1 b2 : Bag a),
        n <> 0 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

At this point we can mark take_ind'' as Opaque, to hide how we obtained this lemma.

Our proof does not change a lot; we merely no longer have to use repeat apply conj:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind'' with (P := fun n b r => length r = min n (length b)); intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

Is it worth it?

It was in my case: Applying this trick in our ongoing work of verifying parts of the Haskell compiler GHC separated a somewhat involved proof into a re-usable proof scheme (go_ind), making the actual proofs (go_all_WellScopedFloats, go_res_WellScoped) much neater and to the point. It saved “only” 60 lines (if I don’t count the 20 “generic” lines above), but the pay-off will increase as I do even more proofs about this function.

Martín Ferrari: MiniDebConf Hamburg - Thursday

18 May, 2018 - 16:43
MiniDebCamp Hamburg - Thursday 17/5

I missed my flight on Wednesday, and for a moment I thought I would have to cancel my attendance, but luckily I was able to buy a ticket for Thursday for a good price.

I arrived at the venue just in time for a "stand-up" meeting, where people introduced themselves and shared what they are working on / planning to work on. That gave me a great feeling, having an idea of what other people are doing, and gave me motivation to work on my projects.

The venue seems to be some kind of cooperative, with office space for different associations; there is also a small guest house (where I am sleeping) and a "kantina". The building seems very pretty, but it is going through some renovations, so the scaffolding does not let you see much of it. It also has a big outdoor area, which is always welcome.

I had a good chat about mapping support in IkiWiki, so my rewrite of the OSM plugin might get some users even before it is completely merged!

I also worked for a while on Prometheus packages, I am hoping to finally get a new version of prometheus-alertmanager packaged soon.

I realised I still had some repos in my home directory on alioth, so I moved these to salsa. In the same vein, I started discussions about migrating my data-collection scripts for contributors.d.o to salsa; this is quite important if we want to keep contributors relevant and useful.


Olivier Berger: Virtualized lab demonstration using a tweaked Labtainers running in a container

18 May, 2018 - 15:39

I’ve recorded a screencast: Labtainers in docker demonstration (embedded below) demonstrating how I’ve tweaked Labtainers so as to run it inside its own Docker container.

I’m currently very much excited by the Labtainers framework for delivering virtual labs, for instance in the context of MOOCs.

Labtainers is quite interesting as it allows isolating a lab in several containers running in their own dedicated virtual network, which helps distributing a lab without needing to install anything locally.

My tweak allows running what I called the “master” container, which contains the labtainers scripts, instead of having to install labtainers on a Linux host. This should help with the installation and distribution of labtainers, as well as deploying it on cloud platforms, some day soon. In the meantime, Labtainer containers of the labs run with privileges, so it’s advised to be careful; running the whole of these containers in a VM may be safer. Maybe Labtainers will evolve in the future to integrate a containerization of its scripts. My patches are pending, but the upstream authors are currently focused on some other priorities.

Another interesting property of labtainers shown in the demo is the auto-grading feature, which uses traces of what the student performed inside the lab environment to evaluate the activities. Here, the telnetlab that I’ve shown is evaluated by looking at text input on the command line or messages appearing on stdout or in logs: the student launched both telnet and ssh, some failed logins appeared, etc.

However, the demo is a bit confusing, in that I recorded a second lab execution after having previously made a first attempt at the same telnetlab. In labtainers, traces of execution can accumulate: the student will make a first attempt, and restart later, before sending it all to the professor (unless a redo.py is issued). This explains why the grading appears to give a different result than what I performed in the screencast.

Stay tuned for more news about my Labtainers adventures.

P.S. thanks to labtainers authors, and obs-studio folks for the screencast recording tool

Louis-Philippe Véronneau: Running systemd in the Gitlab docker runner CI

18 May, 2018 - 11:00

At the DebConf videoteam, we use ansible to manage our machines. Last fall in Cambridge, we migrated our repositories to salsa.debian.org and I started playing with the Gitlab CI. It's pretty powerful and helped us catch a bunch of errors we had missed.

As it was my first time playing with continuous integration and docker, I had trouble when our playbooks used systemd in one way or another and I couldn't figure out a way to have systemd run in the Gitlab docker runner.

Fast forward a few months and I lost another day and a half working on this issue. I haven't been able to make it work (my conclusion is that it's not currently possible), but I thought I would share what I learned in the process with others. Who knows, maybe someone will have a solution!

10 steps to failure

I first started by creating a privileged Gitlab docker runner on a machine that is dedicated to running Gitlab CI runners. To run systemd in docker you either need to run privileged docker instances or to run them with the --cap-add=SYS_ADMIN permission.

If you were trying to run a docker container that runs with systemd directly, you would do something like:

$ docker run -it --cap-add SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro debian-systemd

I tried replicating this behavior with the Gitlab runner by mounting the right volumes in the runner and giving it the right cap permissions.

The thing is, normally your docker container runs an entrypoint command such as CMD ["/lib/systemd/systemd"]. To run its CI scripts, the Gitlab runner takes that container but replaces the entrypoint command by:

sh -c 'if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [ -x /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh \nelif [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ]; then\n\texec /bin/sh \nelse\n\techo shell not found\n\texit 1\nfi\n\n'

That is to say, it tries to run bash.

If you try to run commands that require systemd such as systemctl status, you'll end up with this error message since systemd is not running:

Failed to get D-Bus connection: Operation not permitted

Trying to run systemd manually once the container has been started won't work either, since systemd needs to be PID 1 in order to work (and PID 1 is bash). You end up with this error:

Trying to run as user instance, but the system has not been booted with systemd.

At this point, I came up with a bunch of creative solutions to try to bypass Gitlab's entrypoint takeover. Turns out you can tell the Gitlab runner to override the container's entrypoint with your own. Sadly, the runner then appends its long bash command right after.

For example, if you run a job with this gitlab-ci entry:

image:
  name: debian-systemd
  entrypoint: "/lib/systemd/systemd"
script:
- /usr/local/bin/my-super-script

You will get this entrypoint:

/lib/systemd/systemd sh -c 'if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [ -x /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh \nelif [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ]; then\n\texec /bin/sh \nelse\n\techo shell not found\n\texit 1\nfi\n\n'

This obviously fails. I then tried to be clever and use this entrypoint: ["/lib/systemd/systemd", "&&"]. This does not work either, since docker requires the entrypoint to be only one command.

Someone pointed out to me that you could try to use exec /lib/systemd/systemd to replace the PID 1 bash with systemd, but that also fails with an error telling you the system has not been booted with systemd.

One more level down

Since it seems you can't run systemd in the Gitlab docker runner directly, why not try to run systemd in docker in docker (dind)? dind is used quite a lot in the Gitlab CI to build containers, so we thought it might work.

Sadly, we haven't been able to make this work either. You need to mount volumes in docker to run systemd properly and it seems docker doesn't like to mount volumes from a docker container that have already been mounted from the docker host... Ouf.

If you have been able to run systemd in the Gitlab docker runner, please contact me!

Paths to explore

The only Gitlab runner executor I've used at the moment is the docker one, since it's what most Gitlab instances run. I have no experience with it, but since there is also an LXC executor, it might be possible to run Gitlab CI tests with systemd this way.

Louis-Philippe Véronneau: Join us in Hamburg for the Hamburg Mini-DebConf!

18 May, 2018 - 11:00

Thanks to Debian, I have the chance to be able to attend the Hamburg Mini-DebConf, taking place in Hamburg from May 16th to May 20th. We are hosted by Dock Europe in the amazing Viktoria Kaserne building.

As always, the DebConf videoteam has been hard at work! Our setup is working pretty well and we only have minor fixes to implement before the conference starts.

For those of you who couldn't attend the mini-conf, you can watch the live stream here. Videos will be uploaded shortly after to the DebConf video archive.

Norbert Preining: Docker, cron, environment variables, and Kubernetes

18 May, 2018 - 08:04

I recently mentioned that I am running cron in some of the docker containers I need for a new service. Now that we moved to Kubernetes and Rancher for deployment, I moved most of the configuration into Kubernetes ConfigMaps, and exposed the key/value pairs there as environment variables. Sounded like a good idea, but …

but well, reading the documentation would have helped. Cron scripts do not see the normal environment of the surrounding process (cron, init, whatever), but get a cleaned-up environment. As a consequence, none of the configuration keys available in the environment showed up in the cron jobs – and they failed badly, of course.

After some thinking and reading, I came up with two solutions, one “Dirty Harry^WNorby” solution and one clean and nice, but annoying solution.

Dirty Harry^WNorby solution

What is available in the environment of the cron jobs is minimal, and in fact more or less what is defined in /etc/environment and stuff set by the shell (if it is a shell script). So the solution was adding the necessary variable definitions to /etc/environment in a way that they are properly set. For that, I added the following code in the start-syslog-cron script that is the entry point of the container:

# prepare for export of variables to cron jobs
if [ -r /env-vars-to-be-exported ]
then
  for i in `cat /env-vars-to-be-exported`
  do
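    # ${!i} is bash indirect expansion: the value of the variable whose name is stored in $i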
    echo "$i=${!i}" >> /etc/environment
  done
fi

Meaning, if the container contains a file /env-vars-to-be-exported then the lines of it are considered variable names and are set in the environment with the respective values at the time of invocation.

Using this quick and dirty trick it is now dead-easy to get the ConfigMap variables into the cron job’s environment by adding the necessary variable names to the file /env-vars-to-be-exported. Thus, no adaptation of the original source code was necessary – a big plus!
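
For illustration, the file then simply lists the names of the keys exposed from the ConfigMap, one per line and nothing else, since every line is read verbatim as a variable name (the names below are made up):

DATABASE_URL
SMTP_HOST

After the entry point has run, any cron job in the container sees DATABASE_URL and SMTP_HOST through /etc/environment.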

Be warned, there is no error checking, so one can mess up the container quite easily.

Standard solution

The more standard and clean solution is mounting the ConfigMap and reading the values from the exported files. This is possible, has the big advantage that one can change the values without restarting the containers (mounted ConfigMaps are updated when the ConfigMaps are changed – besides a few corner cases), and no nasty trickery in the initialization.

Disadvantage is that the code of the cron jobs needs to be changed to read the variables from the config files instead of the environment.
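
A minimal sketch of that approach, assuming the ConfigMap is mounted at /config with one file per key (the mount path and the key name are made up):

#!/bin/sh
# Cron job script: read the value from the mounted ConfigMap file
# instead of expecting it in the environment.
SMTP_HOST=$(cat /config/SMTP_HOST)
echo "sending report via $SMTP_HOST"

Since mounted ConfigMaps are refreshed, the script picks up changed values on its next run without a container restart.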

Antoine Beaupré: Securing the container image supply chain

18 May, 2018 - 00:00

This article is part of a series on KubeCon Europe 2018.

"Security is hard" is a tautology, especially in the fast-moving world of container orchestration. We have previously covered various aspects of Linux container security through, for example, the Clear Containers implementation or the broader question of Kubernetes and security, but those are mostly concerned with container isolation; they do not address the question of trusting a container's contents. What is a container running? Who built it and when? Even assuming we have good programmers and solid isolation layers, propagating that good code around a Kubernetes cluster and making strong assertions on the integrity of that supply chain is far from trivial. The 2018 KubeCon + CloudNativeCon Europe event featured some projects that could eventually solve that problem.

Image provenance

A first talk, by Adrian Mouat, provided a good introduction to the broader question of "establishing image provenance and security in Kubernetes" (video, slides [PDF]). Mouat compared software to food you get from the supermarket: "you can actually tell quite a lot about the product; you can tell the ingredients, where it came from, when it was packaged, how long it's good for". He explained that "all livestock in Europe have an animal passport so we can track its movement throughout Europe and beyond". That "requires a lot of work, and time, and money, but we decided that this was worthwhile doing so that we know [our food is] safe to eat. Can we say the same thing about the software running in our data centers?" This is especially a problem in complex systems like Kubernetes; containers have inherent security and licensing concerns, as we have recently discussed.

You should be able to easily tell what is in a container: what software it runs, where it came from, how it was created, and if it has any known security issues, he said. Mouat also expects those properties to be provable and verifiable with strong cryptographic assertions. Kubernetes can make this difficult. Mouat gave a demonstration of how, by default, the orchestration framework will allow different versions of the same container to run in parallel. In his scenario, this is because the default image pull policy (ifNotPresent) might pull a new version on some nodes and not others. This problem arises because of an inconsistency between the way Docker and Kubernetes treat image tags; the former as mutable and the latter as immutable. Mouat said that "the default semantics for pulling images in Kubernetes are confusing and dangerous." The solution here is to deploy only images with tags that refer to a unique version of a container, for example by embedding a Git hash or unique version number in the image tag. Obviously, changing the policy to AlwaysPullImages will also help in solving the particular issue he demonstrated, but will create more image churn in the cluster.
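
As a sketch of what such a unique tag could look like in practice (the image and deployment names are made up), each build might be tagged with the Git commit it came from and the deployment pointed at exactly that tag:

TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:$TAG .
docker push registry.example.com/myapp:$TAG
kubectl set image deployment/myapp myapp=registry.example.com/myapp:$TAG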

But that's only a small part of the problem; even if Kubernetes actually runs the correct image, how can you tell what is actually in that image? In theory, this should be easy. Docker seems like the perfect tool to create deterministic images that consist exactly of what you asked for: a clean and controlled, isolated environment. Unfortunately, containers are far from reproducible and the problem begins on the very first line of a Dockerfile. Mouat gave the example of a FROM debian line, which can mean different things at different times. It should normally refer to Debian "stable", but that's actually a moving target; Debian makes new stable releases once in a while, and there are regular security updates. So what first looks like a static target is actually moving. Many Dockerfiles will happily fetch random source code and binaries from the network. Mouat encouraged people to at least checksum the downloaded content to prevent basic attacks and problems.
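
One simple form of that check, sketched here with a placeholder URL and checksum, is to abort the build when a downloaded artifact does not match a known hash:

curl -fsSL -o tool.tar.gz https://example.com/tool.tar.gz
echo "EXPECTED_SHA256  tool.tar.gz" | sha256sum -c -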

Unfortunately, all this still doesn't get us reproducible builds since container images include file timestamps, build identifiers, and image creation time that will vary between builds, making container images hard to verify through bit-wise comparison or checksums. One solution there is to use alternative build tools like Bazel that allow you to build reproducible images. Mouat also added that there is "tension between reproducibility and keeping stuff up to date" because using hashes in manifests will make updates harder to deploy. By using FROM debian, you automatically get updates when you rebuild that container. Using FROM debian:stretch-20180426 will get you a more reproducible container, but you'll need to change your manifest regularly to follow security updates. Once we know what is in our container, there is at least a standard in the form of the OCI specification that allows attaching annotations to document the contents of containers.

Another problem is making sure containers are up to date, a "weirdly hard" question to answer according to Mouat: "why can't I ask my registry [if] there is new version of [a] tag, but as far as I know, there's no way you can do that." Mouat literally hand-waved at a slide showing various projects designed to scan container images for known vulnerabilities, introducing Aqua, Clair, NeuVector, and Twistlock. Mouat said we need a more "holistic" solution than the current whack-a-mole approach. His company is working on such a product called Trow, but not much information about it was available at the time of writing.

The long tail of the supply chain

Verifying container images is exactly the kind of problem Notary is designed to solve. Notary is a server "that allows anyone to have trust over arbitrary collections of data". In practice, that can be used by the Docker daemon as an additional check before fetching images from the registry. This allows operators to approve images with cryptographic signatures before they get deployed in the cluster.

Notary implements The Update Framework (TUF), a specification covering the nitty-gritty details of signatures, key rotation, and delegation. It keeps signed hashes of container images that can be used for verification; it can be deployed by enabling Docker's "content trust" in any Docker daemon, or by configuring a custom admission controller with a web hook in Kubernetes. In another talk (slides [PDF], video) Liam White and Michael Hough covered the basics of Notary's design and how it interacts with Docker. They also introduced Portieris as an admission controller hook that can implement a policy like "allow any image from the LWN Docker registry as long as it's signed by your favorite editor". Policies can be scoped by namespace as well, which can be useful in multi-tenant clusters. The downside of Portieris is that it supports only IBM Cloud Notary servers because the images need to be explicitly mapped between the Notary server and the registry. The IBM team knows only about how to map its own images, but the speakers said they were open to contributions there.
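
On the Docker side, content trust can be switched on for the client with an environment variable; a minimal sketch (the image name is made up) looks like:

export DOCKER_CONTENT_TRUST=1
docker pull registry.example.com/myapp:1.2.3   # refused unless a trusted signature exists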

A limitation of Notary is that it looks only at the last step of the build chain; in itself, it provides no guarantees on where the image comes from, how the image was built, or what it's made of. In yet another talk (slides [PDF] video), Wendy Dembowski and Lukas Puehringer introduced a possible solution to that problem: two projects that work hand-in-hand to provide end-to-end verification of the complete container supply chain. Puehringer first introduced the in-toto project as a tool to authenticate the integrity of individual build steps: code signing, continuous integration (CI), and deployment. It provides a specification for "open and extensible" metadata that certifies how each step was performed and the resulting artifacts. This could be, at the source step, as simple as a Git commit hash or, at the CI step, a build log and artifact checksums. All steps are "chained" as well, so that you can track which commit triggered the deployment of a specific image. The metadata is cryptographically signed by role keys to provide strong attestations as to the provenance and integrity of each step. The in-toto project is supervised by Justin Cappos, who also works on TUF, so it shares some of its security properties and integrates well with the framework. Each step in the build chain has its own public/private key pair, with support for role delegation and rotation.

In-toto is a generic framework allowing a complete supply chain verification by providing "attestations" that a given artifact was created by the right person using the right source. But it does not necessarily provide the hooks to do those checks in Kubernetes itself. This is where Grafeas comes in, by providing a global API to read and store metadata. That can be package versions, vulnerabilities, license or vulnerability scans, builds, images, deployments, and attestations such as those provided by in-toto. All of those can then be used by the Kubernetes admission controller to establish a policy that regulates image deployments. Dembowski referred to this tutorial by Kelsey Hightower as an example configuration to integrate Grafeas in your cluster. According to Puehringer: "It seems natural to marry the two projects together because Grafeas provides a very well-defined API where you can push metadata into, or query from, and is well integrated in the cloud ecosystem, and in-toto provides all the steps in the chain."

Dembowski said that Grafeas is already in use at Google and it has been found useful to keep track of metadata about containers. Grafeas can keep track of what each container is running, who built it, when (sometimes vulnerable) code was deployed, and make sure developers do not ship containers built on untrusted development machines. This can be useful when a new vulnerability comes out and administrators scramble to figure out if or where affected code is deployed.

Puehringer explained that in-toto's reference implementation is complete and he is working with various Linux distributions to get them to use link metadata to have their package managers perform similar verification.

Conclusion

The question of container trust hardly seems resolved at all; the available solutions are complex and would be difficult to deploy for Kubernetes rookies like me. However, it seems that Kubernetes could make small improvements to improve security and auditability, the first of which is probably setting the image pull policy to a more reasonable default. In his talk, Mouat also said it should be easier to make Kubernetes fetch images only from a trusted registry instead of allowing any arbitrary registry by default.

Beyond that, cluster operators wishing to have better control over their deployments should start looking into setting up Notary with an admission controller, maybe Portieris if they can figure out how to make it play with their own Notary servers. Considering the apparent complexity of Grafeas and in-toto, I would assume that those would probably be reserved only to larger "enterprise" deployments but who knows; Kubernetes may be complex enough as it is that people won't mind adding a service or two in there to improve its security. Keep in mind that complexity is an enemy of security, so operators should be careful when deploying solutions unless they have a good grasp of the trade-offs involved.

This article first appeared in the Linux Weekly News.

Jonathan McDowell: Home Automation: Raspberry Pi as MQTT temperature sensor

17 May, 2018 - 03:16

After setting up an MQTT broker I needed some data to feed it. It made sense to start basic and gradually build up bits and pieces that would form a bigger home automation setup. As it happened I have an old Raspberry Pi B (original rev 1 [2 if you look at /proc/cpuinfo] with 256MB RAM) and some DS18B20 1-Wire temperature sensors lying around, so I decided to make a heavyweight temperature sensor (long term I’m hoping to do something with some ESP8266s).

There are plenty of guides out there about hooking up the DS18B20 to the Pi; Adafruit has a reasonable one. The short version is that GPIO4 can be easily configured to be a 1-Wire bus and you hook the DS18B20 up with a 4k7Ω resistor across the data + 3v3 power pins. An initial check can be performed by enabling the DT overlay on the fly:

sudo dtoverlay w1-gpio

Detection of 1-Wire devices is automatic so you should see an entry in dmesg looking like:

w1_master_driver w1_bus_master1: Attaching one wire slave 28.012345678abcd crc ef

You can then do

$ cat /sys/bus/w1/devices/28-*/w1_slave
1e 01 4b 46 7f ff 0c 10 18 : crc=18 YES
1e 01 4b 46 7f ff 0c 10 18 t=17875

Which shows a current temperature of 17.875°C in my study. Once that's working (and you haven't swapped GND and DATA like I did on the first go) you can make the Pi boot up with 1-Wire enabled by adding a dtoverlay=w1-gpio line to /boot/config.txt. The next step is to get that fed into the MQTT broker. A simple Python client seemed like the right approach. Debian has paho-mqtt but sadly not in a stable release. Thankfully the python3-paho-mqtt 1.3.1-1 package in testing installed just fine on the Raspbian stretch image my Pi is running. I dropped the following in /usr/local/bin/mqtt-temp:

#!/usr/bin/python3

import glob
import time
import paho.mqtt.publish as publish

Broker = 'mqtt-host'
auth = {
    'username': 'user2',
    'password': 'bar',
}

pub_topic = 'test/temperature'

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28-*')[0]
device_file = device_folder + '/w1_slave'

def read_temp():
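    # The w1_slave file has the two-line format shown earlier: the first
    # line ends in YES when the CRC check passed, and the second carries
    # the temperature in millidegrees Celsius after " t=".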
    valid = False
    temp = 0
    with open(device_file, 'r') as f:
        for line in f:
            if line.strip()[-3:] == 'YES':
                valid = True
            temp_pos = line.find(' t=')
            if temp_pos != -1:
                temp = float(line[temp_pos + 3:]) / 1000.0

    if valid:
        return temp
    else:
        return None


while True:
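    # Publish the reading over TLS (port 8883) with the credentials above,
    # then wait a minute before taking the next sample.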
    temp = read_temp()
    if temp is not None:
        publish.single(pub_topic, str(temp),
                hostname=Broker, port=8883,
                auth=auth, tls={})
    time.sleep(60)

And finished it off with a systemd unit file - I know a lot of people complain about systemd, but it really does make it easy to just spin up a minimal service as a unique non-privileged user. The following went in /etc/systemd/system/mqtt-temp.service:

[Unit]
Description=MQTT Temperature sensor
After=network.target

[Service]
# Hack because Python can't cope with a DynamicUser with no HOME
Environment="HOME=/"
ExecStart=/usr/local/sbin/mqtt-temp

DynamicUser=yes
MemoryDenyWriteExecute=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

Start it up and enable for subsequent reboots:

systemctl start mqtt-temp
systemctl enable mqtt-temp

And then watch on my Debian test box as before:

$ mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo
test/temperature 17.875
test/temperature 17.937

Jonathan Carter: Video Channel Updates

17 May, 2018 - 01:19

Last month, I started doing something that I’ve been meaning to do for years, and that’s to start a video channel and make some free software related videos.

I started out uploading to my YouTube channel which has been dormant for a really long time, and then last week, I also uploaded my videos to my own site, highvoltage.tv. It’s a MediaDrop instance, a video hosting platform written in Python.

I’ll still keep uploading to YouTube, but ultimately I’d like to make my self-hosted site the primary source for my content. I’m not sure if I’ll stay with MediaDrop, but it does tick a lot of boxes, and if it’s easy enough to extend, I’ll probably stick with it. MediaDrop might also be a good platform for viewing Debian meeting videos like the DebConf videos.

My current topics are very much Debian related, but that doesn’t exclude any other types of content from being included in the future. Here’s what I have so far:

  • Video Logs: Almost like a blog, in video format.
  • Howto: Howto videos.
  • Debian Package of the Day: Exploring packages in the Debian archive.
  • Debian Package Management: Howto series on Debian package management, a precursor to a series that I’ll do on Debian packaging.
  • What’s the Difference: Just comparing 2 or more things.
  • Let’s Internet: Read stuff from Reddit, Slashdot, Quora, blogs and other media.

It’s still early days and there’s a bunch of ideas that I still want to implement, so the content will hopefully get a lot better as time goes on.

I also quit Facebook last month, so I dusted off my old Mastodon account and started posting there again: https://mastodon.xyz/@highvoltage

You can also subscribe to my videos via RSS: https://highvoltage.tv/latest.xml

Other than that I’m open to ideas, thanks for reading :)

Antoine Beaupré: Updates in container isolation

17 May, 2018 - 00:00

This article is part of a series on KubeCon Europe 2018.

KubeCon EU At KubeCon + CloudNativeCon Europe 2018, several talks explored the topic of container isolation and security. The last year saw the release of Kata Containers which, combined with the CRI-O project, provided strong isolation guarantees for containers using a hypervisor. During the conference, Google released its own hypervisor called gVisor, adding yet another possible solution for this problem. Those new developments prompted the community to work on integrating the concept of "secure containers" (or "sandboxed containers") deeper into Kubernetes. This work is now coming to fruition; it prompts us to look again at how Kubernetes tries to keep the bad guys from wreaking havoc once they break into a container.

Attacking and defending the container boundaries

Tim Allclair's talk (slides [PDF], video) was all about explaining the possible attacks on secure containers. To simplify, Allclair said that "secure is isolation, even if that's a little imprecise" and explained that isolation is directional across boundaries: for example, a host might be isolated from a guest container, but the container might be fully visible from the host. So there are two distinct problems here: threats from the outside (attackers trying to get into a container) and threats from the inside (attackers trying to get out of a compromised container). Allclair's talk focused on the latter. In this context, sandboxed containers are concerned with threats from the inside; once the attacker is inside the sandbox, they should not be able to compromise the system any further.

Attacks can take multiple forms: untrusted code provided by users in multi-tenant clusters, un-audited code fetched from random sites by trusted users, or trusted code compromised through an unknown vulnerability. According to Allclair, defending a system from a compromised container is harder than defending a container from external threats, because there is a larger attack surface. While outside attackers only have access to a single port, attackers on the inside often have access to the kernel's extensive system-call interface, a multitude of storage backends, the internal network, daemons providing services to the cluster, hardware interfaces, and so on.

Taking those vectors one by one, Allclair first looked at the kernel and said that there were 169 code execution vulnerabilities in the Linux kernel in 2017. He admitted this was a bit of fear mongering; it indeed was a rather unusual year and "most of those were in mobile device drivers". These vulnerabilities are not really a problem for Kubernetes unless you run it on your phone. Allclair said that at least one attendee at the conference was probably doing exactly that; as it turns out, some people have managed to run Kubernetes on a vacuum cleaner. Container runtimes implement all sorts of mechanisms to reduce the kernel's attack surface: Docker has seccomp profiles, but Kubernetes turns those off by default. Runtimes will use AppArmor or SELinux rule sets. There are also ways to run containers as non-root, which was the topic of a pun-filled separate talk as well. Unfortunately, those mechanisms do not fundamentally solve the problem of kernel vulnerabilities. Allclair cited the Dirty COW vulnerability as a classic example of a container escape through race conditions on system calls that are allowed by security profiles.

The proposed solution to this problem is to add a second security boundary. This is apparently an overarching principle at Google, according to Allclair: "At Google, we have this security principle that between any untrusted code and user data there have to be at least two distinct security boundaries, so that means two independent security mechanisms need to fail in order for that untrusted code to get out that user data."

Adding another boundary makes attacks harder to accomplish. One such solution is to use a hypervisor like Kata Containers or gVisor. Those new runtimes depend on a sandboxed setting that is still in the proposal stage in the Kubernetes API.

gVisor as an extra boundary

Let's look at gVisor as an example hypervisor. Google spent five years developing the project in the dark before sharing it with the world. At KubeCon, it was introduced in a keynote and a more in-depth talk (slides [PDF], video) by Dawn Chen and Zhengyu He. gVisor is a user-space kernel that implements a subset of the Linux kernel API, but which was written from scratch in Go. The idea is to have an independent kernel that reduces the attack surface; while the Linux kernel has 20 million lines of code, at the time of writing gVisor only has 185,000, which should make it easier to review and audit. It provides a cleaner and simpler interface: no hardware drivers, interrupts, or I/O port support to implement, as the host operating system takes care of all that mess.

As shown in a diagram in the talk slides, gVisor has a component called "sentry" that implements the core of the system-call logic. It uses ptrace() out of the box for portability reasons, but can also work with KVM for better security and performance, as ptrace() is slow and racy. Sentry can use KVM to map processes to CPUs and provide lower-level support like privilege separation and memory management. He suggested thinking of gVisor as a "layered solution" to provide isolation, as it also uses seccomp filters and namespaces. He explained how it differs from user-mode Linux (UML): while UML is a port of Linux to user space, gVisor actually reimplements the Linux system calls (211 of the 319 x86-64 system calls) using only 64 system calls in the host system. Another key difference from other systems, like unikernels or Google's Native Client (NaCl), is that it can run unmodified binaries. To fix classes of attacks relying on the open() system call, gVisor also forbids any direct filesystem access; all filesystem operations go through a second process called the Gofer that enforces access permissions, in another example of a double security boundary.

According to He, gVisor has a 150ms startup time and 15MB overhead, close to Kata Containers startup times, but smaller in terms of memory. He said the approach is good for small containers in high-density workloads. It is not so useful for trusted images (because it's not required), workloads that make heavy use of system calls (because of the performance overhead), or workloads that require hardware access (because that's not available at all). Even though gVisor implements a large number of system calls, some functionality is missing. There is no System V shared memory, for example, which means PostgreSQL does not work under gVisor. A simple ping might not work either, as gVisor lacks SOCK_RAW support. Linux has been in use for decades now and is more than just a set of system calls: interfaces like /proc and sysfs also make Linux what it is. Of those, gVisor only implements a subset of /proc currently, with the result that some containers will not work with gVisor without modification, for now.

As an aside, the new hypervisor does allow for experimentation and development of new system calls directly in user space. The speakers confirmed this was another motivation for the project; the hope is that having a user-space kernel will allow faster iteration than working directly in the Linux kernel.

Escape from the hypervisor

Of course, hypervisors like gVisor are only a part of the solution to pod security. In his talk, Allclair warned that even with a hypervisor, there are still ways to escape a container. He cited the CVE-2017-1002101 vulnerability, which allows hostile container images to take over a host through specially crafted symbolic links. Like native containers, hypervisors like Kata Containers also allow the guest to mount filesystems across the container boundary, so they are vulnerable to such an attack.

Kubernetes fixed that specific bug, but a general solution is still in the design phase. Allclair said that ephemeral storage should be treated as opaque to the host, making sure that the host never interacts directly with image files and just passes them down to the guest untouched. Similarly, runtimes should "mount block volumes directly into the sandbox, not onto the host". Network filesystems are trickier; while it's possible to mount (say) a Ceph filesystem in the guest, that means the access credentials now reside within the guest, which moves the security boundary into the untrusted container.

Allclair outlined networking as another attack vector: Kubernetes exposes a lot of unauthenticated services on the network by default. In particular, the API server is a gold mine of information about the cluster. Another attack vector is untrusted data flows from containers to the user. For example, container logs travel through various Kubernetes components, and some components, like Fluentd, will end up parsing those logs directly. Allclair said that many different programs are "looking at untrusted data; if there's a vulnerability there, it could lead to remote code execution". When he looked at the history of vulnerabilities in that area, he could find no direct code execution, but "one of the dependencies in Fluentd for parsing JSON has seven different bugs with segfault issues so we can see that could lead to a memory vulnerability". As a possible solution to such issues, Allclair proposed isolating components in their own (native, as opposed to sandboxed) containers, which might be sufficient because Fluentd acts as a first trusted boundary.

Conclusion

A lot of work is happening to improve what is widely perceived as defective container isolation in the Linux kernel. Some take the approach of trying to run containers as regular users ("root-less containers") and rely on the Linux kernel's user-isolation properties. Others found this relies too much on the security of the kernel and use separate hypervisors, like Kata Containers and gVisor. The latter seems especially interesting because it is lightweight and doesn't add much attack surface. In comparison, Kata Containers relies on a kernel running inside the container, which actually expands the attack surface instead of reducing it. The proposed API for sandboxed containers is currently experimental in the containerd and CRI-O projects; Allclair expects the API to ship in alpha as part of the Kubernetes 1.12 release.

It's important to keep in mind that hypervisors are not a panacea: they do not support all workloads because of compatibility and performance issues. A hypervisor is only a partial solution; Allclair said the next step is to provide hardened interfaces for storage, logging, and networking and encouraged people to get involved in the node special interest group and the proposal [Google Docs] on the topic.

This article first appeared in the Linux Weekly News.

Shirish Agarwal: FOSS game community slump and question about getting images in palepeli

16 May, 2018 - 22:16

There is a thread on freegamedev.net that I have been following for the past few weeks.

In the back-and-forth argument there, I believe most of the points shared were somewhat wrong.

While we have AAA projects like 0ad and others, the mainstay of our games should be ones which don’t need any high-quality textures and still do the job.

I have been looking at a Let’s Play playlist of an indie gem called ‘Dead in Vinland’.

Youtube playlist link – https://www.youtube.com/playlist?list=PL8eo1fAeRpyl47_TRuQrtkBn2bNRaZtlT

If you look at the game, it doesn’t have much in terms of animation apart from bits of role-playing in encounters, but is more oriented towards dice rolls.

The characters in the game are sort of cut-out characters, very much like the cut-out cardboard/paperboard characters that we used to play with as children.

Where the game innovates is more in terms of an expansive dialogue tree and at most 100-200 images of the characters. It basically has a group of four permanent characters; if any one of them dies, the player is defeated.

If anybody has played Rogue or any of the games in the new Debian games-rogue task, they would know that FOSS offers a variety of stories even in the RPG world.

$ aptitude show games-rogue
Package: games-rogue
Version: 2.3
State: installed
Automatically installed: no
Priority: optional
Section: metapackages
Maintainer: Debian Games Team
Architecture: all
Uncompressed Size: 24.6 k
Depends: games-tasks (= 2.3)
Recommends: allure, angband, crawl, gearhead, gearhead2, hearse, hyperrogue, lambdahack, meritous, moria, nethack-x11, omega-rpg, slashem
Description: Debian's roguelike games
This metapackage will install dungeon crawling games in the spirit of Rogue.
Homepage: https://blends.debian.org/games/tasks/

I picked RPGs because those are the kind of games I have always liked, even turn-based ones, although I do hate the fact that I can’t save-scum the way I can in most traditional RPGs.

Variety and Palapeli

I now turn my attention to a package called variety.

While looking at it, I also pushed a wishlist bug for packaging the new version, which would fix a bug I reported about a month back. It was in the same conversation that I came to know that the upstream releaser and the downstream maintainer are one and the same.

I have also been talking with upstream about various features or what could be done to make variety better.

Now, while that’s all well and good and variety does a good job as a wallpaper changer, I want the wallpapers that variety outputs to become the input of palapeli, but without all the manual intervention it currently requires. Variety does a pretty good job of fetching good-quality wallpapers and goes beyond that by saving the metadata of the images, unlike many online image services, so there is quite a bit of information about each one. To make things easier for myself, I made a directory called Variety in Pictures and copied everything from ~/.config/variety/Favorites by doing:

~/.config/variety/Favorites$ cp -r --preserve . /home/shirish/Pictures/Variety/

And this method works out fine. The --preserve option is essential, as can be seen from the cp manpage:

--preserve[=ATTR_LIST]
       preserve the specified attributes (default: mode,ownership,timestamps), if possible
       additional attributes: context, links, xattr, all

Even the metadata of the images is pretty good, as can be seen from any random picture:

~/Pictures/Variety$ exiftool 7527881664_024e44f8bf_o.jpg
ExifTool Version Number : 10.96
File Name : 7527881664_024e44f8bf_o.jpg
Directory : .
File Size : 1175 kB
File Modification Date/Time : 2018:04:09 03:56:15+05:30
File Access Date/Time : 2018:05:14 22:59:22+05:30
File Inode Change Date/Time : 2018:05:15 08:01:42+05:30
File Permissions : rw-r--r--
File Type : JPEG
File Type Extension : jpg
MIME Type : image/jpeg
Exif Byte Order : Little-endian (Intel, II)
Image Description :
Make : Canon
Camera Model Name : Canon EOS 5D
X Resolution : 240
Y Resolution : 240
Resolution Unit : inches
Software : Adobe Photoshop Lightroom 3.6 (Windows)
Modify Date : 2012:07:08 17:58:13
Artist : Peter Levi
Copyright : Peter Levi
Exposure Time : 1/50
F Number : 7.1
Exposure Program : Aperture-priority AE
ISO : 160
Exif Version : 0230
Date/Time Original : 2011:05:03 12:17:50
Create Date : 2011:05:03 12:17:50
Shutter Speed Value : 1/50
Aperture Value : 7.1
Exposure Compensation : 0
Max Aperture Value : 4.0
Metering Mode : Multi-segment
Flash : Off, Did not fire
Focal Length : 45.0 mm
User Comment :
Focal Plane X Resolution : 3086.925795
Focal Plane Y Resolution : 3091.295117
Focal Plane Resolution Unit : inches
Custom Rendered : Normal
Exposure Mode : Auto
White Balance : Auto
Scene Capture Type : Standard
Owner Name : Tsvetan ROUSTCHEV
Serial Number : 1020707385
Lens Info : 24-105mm f/?
Lens Model : EF24-105mm f/4L IS USM
XP Comment :
Compression : JPEG (old-style)
Thumbnail Offset : 908
Thumbnail Length : 18291
XMP Toolkit : XMP Core 4.4.0-Exiv2
Creator Tool : Adobe Photoshop Lightroom 3.6 (Windows)
Metadata Date : 2012:07:08 17:58:13+03:00
Lens : EF24-105mm f/4L IS USM
Image Number : 1
Flash Compensation : 0
Firmware : 1.1.1
Format : image/jpeg
Version : 6.6
Process Version : 5.7
Color Temperature : 4250
Tint : +8
Exposure : 0.00
Shadows : 2
Brightness : +65
Contrast : +28
Sharpness : 25
Luminance Smoothing : 0
Color Noise Reduction : 25
Chromatic Aberration R : 0
Chromatic Aberration B : 0
Vignette Amount : 0
Shadow Tint : 0
Red Hue : 0
Red Saturation : 0
Green Hue : 0
Green Saturation : 0
Blue Hue : 0
Blue Saturation : 0
Fill Light : 0
Highlight Recovery : 23
Clarity : 0
Defringe : 0
Gray Mixer Red : -8
Gray Mixer Orange : -17
Gray Mixer Yellow : -21
Gray Mixer Green : -25
Gray Mixer Aqua : -19
Gray Mixer Blue : +8
Gray Mixer Purple : +15
Gray Mixer Magenta : +4
Split Toning Shadow Hue : 0
Split Toning Shadow Saturation : 0
Split Toning Highlight Hue : 0
Split Toning Highlight Saturation: 0
Split Toning Balance : 0
Parametric Shadows : 0
Parametric Darks : 0
Parametric Lights : 0
Parametric Highlights : 0
Parametric Shadow Split : 25
Parametric Midtone Split : 50
Parametric Highlight Split : 75
Sharpen Radius : +1.0
Sharpen Detail : 25
Sharpen Edge Masking : 0
Post Crop Vignette Amount : 0
Grain Amount : 0
Color Noise Reduction Detail : 50
Lens Profile Enable : 1
Lens Manual Distortion Amount : 0
Perspective Vertical : 0
Perspective Horizontal : 0
Perspective Rotate : 0.0
Perspective Scale : 100
Convert To Grayscale : True
Tone Curve Name : Medium Contrast
Camera Profile : Adobe Standard
Camera Profile Digest : 9C14C254921581D1141CA0E5A77A9D11
Lens Profile Setup : LensDefaults
Lens Profile Name : Adobe (Canon EF 24-105mm f/4 L IS USM)
Lens Profile Filename : Canon EOS-1Ds Mark III (Canon EF 24-105mm f4 L IS USM) - RAW.lcp
Lens Profile Digest : 0387279C5E7139287596C051056DCFAF
Lens Profile Distortion Scale : 100
Lens Profile Chromatic Aberration Scale: 100
Lens Profile Vignetting Scale : 100
Has Settings : True
Has Crop : False
Already Applied : True
Document ID : xmp.did:8DBF8F430DC9E11185FD94228EB274CE
Instance ID : xmp.iid:8DBF8F430DC9E11185FD94228EB274CE
Original Document ID : xmp.did:8DBF8F430DC9E11185FD94228EB274CE
Source URL : https://www.flickr.com/photos/93647178@N00/7527881664
Source Type : flickr
Author : peter-levi
Source Name : Flickr
Image URL : https://farm9.staticflickr.com/8154/7527881664_024e44f8bf_o.jpg
Author URL : https://www.flickr.com/photos/93647178@N00
Source Location : user:www.flickr.com/photos/peter-levi/;user_id:93647178@N00;
Creator : Peter Levi
Rights : Peter Levi
Subject : paris, france, lyon, lyonne
Tone Curve : 0, 0, 32, 22, 64, 56, 128, 128, 192, 196, 255, 255
History Action : derived, saved
History Parameters : converted from image/x-canon-cr2 to image/jpeg, saved to new location
History Instance ID : xmp.iid:8DBF8F430DC9E11185FD94228EB274CE
History When : 2012:07:08 17:58:13+03:00
History Software Agent : Adobe Photoshop Lightroom 3.6 (Windows)
History Changed : /
Derived From :
Displayed Units X : inches
Displayed Units Y : inches
Current IPTC Digest : 044f85a540ebb92b9514cab691a3992d
Coded Character Set : UTF8
Application Record Version : 4
Date Created : 2011:05:03
Time Created : 12:17:50+03:00
Digital Creation Date : 2011:05:03
Digital Creation Time : 12:17:50+03:00
By-line : Peter Levi
Copyright Notice : Peter Levi
Headline : IMG_0779
Keywords : paris, france, lyon, lyonne
Caption-Abstract :
Photoshop Thumbnail : (Binary data 18291 bytes, use -b option to extract)
IPTC Digest : d8a6ddc0b5eacb05874d4b676f4cb439
Profile CMM Type : Linotronic
Profile Version : 2.1.0
Profile Class : Display Device Profile
Color Space Data : RGB
Profile Connection Space : XYZ
Profile Date Time : 1998:02:09 06:49:00
Profile File Signature : acsp
Primary Platform : Microsoft Corporation
CMM Flags : Not Embedded, Independent
Device Manufacturer : Hewlett-Packard
Device Model : sRGB
Device Attributes : Reflective, Glossy, Positive, Color
Rendering Intent : Perceptual
Connection Space Illuminant : 0.9642 1 0.82491
Profile Creator : Hewlett-Packard
Profile ID : 0
Profile Copyright : Copyright (c) 1998 Hewlett-Packard Company
Profile Description : sRGB IEC61966-2.1
Media White Point : 0.95045 1 1.08905
Media Black Point : 0 0 0
Red Matrix Column : 0.43607 0.22249 0.01392
Green Matrix Column : 0.38515 0.71687 0.09708
Blue Matrix Column : 0.14307 0.06061 0.7141
Device Mfg Desc : IEC http://www.iec.ch
Device Model Desc : IEC 61966-2.1 Default RGB colour space - sRGB
Viewing Cond Desc : Reference Viewing Condition in IEC61966-2.1
Viewing Cond Illuminant : 19.6445 20.3718 16.8089
Viewing Cond Surround : 3.92889 4.07439 3.36179
Viewing Cond Illuminant Type : D50
Luminance : 76.03647 80 87.12462
Measurement Observer : CIE 1931
Measurement Backing : 0 0 0
Measurement Geometry : Unknown
Measurement Flare : 0.999%
Measurement Illuminant : D65
Technology : Cathode Ray Tube Display
Red Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract)
Green Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract)
Blue Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract)
DCT Encode Version : 100
APP14 Flags 0 : [14]
APP14 Flags 1 : (none)
Color Transform : YCbCr
Image Width : 1920
Image Height : 1280
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:4:4 (1 1)
Aperture : 7.1
Date/Time Created : 2011:05:03 12:17:50+03:00
Digital Creation Date/Time : 2011:05:03 12:17:50+03:00
Image Size : 1920x1280
Megapixels : 2.5
Scale Factor To 35 mm Equivalent: 1.0
Shutter Speed : 1/50
Thumbnail Image : (Binary data 18291 bytes, use -b option to extract)
Circle Of Confusion : 0.030 mm
Field Of View : 43.5 deg
Focal Length : 45.0 mm (35 mm equivalent: 45.1 mm)
Hyperfocal Distance : 9.51 m
Lens ID : Canon EF 24-105mm f/4L IS USM
Light Value : 10.6

One of the things I found wanting is a license field; I don’t know whether there is one for CC-SA 3.0 (Creative Commons Share-Alike 3.0). It does have everything else needed to know how a particular picture was taken.

For the puzzle, it needs at least three pieces of information:

a. The name of the image. For many images you need to make one up, as many a time photographers are either not imaginative or do not have the time to give a meaningful, descriptive name. I have had an interesting discussion on a similar topic with the authors of palapeli; there is a lot left to do in getting descriptions right. At least in free software, I don’t know of any way to process images so that descriptions can be worked out automatically.

b. A comment – This is optional, but if you have captured a memorable image, this is the best way to record it. I hate to bring up Bing wallpapers, but I have seen that they have some of the best descriptions of what I’m trying to say –

shirish@debian:~/Pictures/Variety$ exiftool ManateeMom_EN-US9983570199_1920x1080.jpg | grep Caption-Abstract
Caption-Abstract : West Indian manatee mom and baby at Three Sisters Springs, Florida (© James R.D. Scott/Getty Images)
shirish@debian:~/Pictures/Variety$ exiftool ManateeMom_EN-US9983570199_1920x1080.jpg | grep Comment
User Comment : West Indian manatee mom and baby at Three Sisters Springs, Florida (© James R.D. Scott/Getty Images)
XP Comment :
Comment : West Indian manatee mom and baby at Three Sisters Springs, Florida (© James R.D. Scott/Getty Images)

c. Name of the author – Many a time I just write either ‘nameless’ or ‘unknown’, as the author hasn’t taken any pains to share who they are.

The reason I have talked about palapeli is that it is the only project I know of that has done extensive work on unique slicers, so that the shapes of the puzzle pieces all come out different. But it’s still a trial-and-error method, as it doesn’t have a preview mode.

While there’s a lot that could be improved in both of the above projects, for the moment I would be happy just to have a script that takes the images as input, adds or fibs the details (although actual details would be best), and does the work.

I am sure that if I put my mind to it, I’ll probably find ways in which even exiftool could be improved, but I can’t thank the people who have made such a wonderful tool enough.

For instance, while looking at the metadata of quite a few pictures, I found that many people use arbitrary fields to say where the picture was shot: some use Headline, some use Comment or Description, while others use Subject. If somebody wanted to write a script, it would be difficult, as there may be images which have all three fields with different content in each (possibly), in a context known only to the author.
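
As a rough sketch of the kind of script I mean (and it is only a sketch: it assumes exiftool is installed and simply guesses at a sensible fallback order among the fields mentioned above), something like this could pull a name, a comment and an author out of each image, fibbing placeholders where they are missing:

#!/usr/bin/python3
# Rough sketch only: pull a usable name, comment and author out of image
# metadata, fibbing placeholders where the fields are missing. Assumes the
# exiftool command is installed; the choice and order of fallback fields is
# my own guess, not anything variety or palapeli prescribe.

import json
import subprocess
import sys


def read_metadata(path):
    # exiftool -j emits a JSON array with one object per file
    out = subprocess.run(['exiftool', '-j', path], stdout=subprocess.PIPE,
                         universal_newlines=True, check=True)
    return json.loads(out.stdout)[0]


def first_of(meta, fields, fallback):
    # Photographers fill in different fields, so try several in turn
    for field in fields:
        value = meta.get(field)
        if value:
            return str(value)
    return fallback


for path in sys.argv[1:]:
    meta = read_metadata(path)
    name = first_of(meta, ['Headline', 'Title', 'FileName'], 'untitled')
    comment = first_of(meta, ['Caption-Abstract', 'UserComment', 'Comment',
                              'Description', 'Subject'], '')
    author = first_of(meta, ['Artist', 'By-line', 'Creator'], 'unknown')
    print('{}\n  name:    {}\n  comment: {}\n  author:  {}'.format(
        path, name, comment, author))

Actually handing the chosen image and those details over to palapeli would still have to be wired up separately, but at least the metadata wrangling would be dealt with.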

The good thing is that we have the latest version in Debian testing:

$ exiftool -ver
10.96

Peace out.
