Planet Debian

Planet Debian - http://planet.debian.org/

Jonathan Dowland: Three Minimalism reads

7 December, 2017 - 23:26

"The Life-Changing Magic of Tidying Up" by Marie Kondo is a popular (New York Times best selling) book by lifestyle consultant Mari Kondo about tidying up and decluttering. It's not strictly about minimalism, although her approach is informed by her own preferences which are minimalist. Like all self-help books, there's some stuff in here that you might find interesting or applicable to your own life, amongst other stuff you might not. Kondo believes, however, that her methods only works if you stick to them utterly.

Next is "Goodbye, Things" by Fumio Sasaki. The end-game for this book really is minimalism, but the book is structured in such a way that readers at any point on a journey to minimalism (or coinciding with minimalism, if that isn't your end-goal) can get something out of it. A large proportion of the middle of the book is given over to a general collection of short, one-page-or-less tips on decluttering, minimising, etc. You can randomly flip through this section a bit like randomly drawing a card from a deck. I started to wonder whether there's a gap in the market for an Oblique Strategies-like minimalism product. The book recommended several blogs for further reading, but they are all written in Japanese.

Finally, issue #18 of New Philosopher is the "Stuff" issue and features several articles from modern Philosophers (as well as some pertinent material from classical ones) on the nature of materialism. I've been fascinated by Philosophy from a distance ever since my brother read it as an Undergraduate, so I occasionally buy the philosophical equivalent of Pop Science books or magazines, but this was the most accessible one I've read to date.

Wouter Verhelst: Adding subtitles with FFmpeg

7 December, 2017 - 19:52

For future reference (to myself, for the most part):

ffmpeg -i foo.webm -i foo.en.vtt -i foo.nl.vtt -map 0:v -map 0:a \
  -map 1:s -map 2:s -metadata:s:a language=eng -metadata:s:s:0   \
  language=eng -metadata:s:s:1 language=nld -c copy -y           \
  foo.subbed.webm

... is one way to create a single .webm file from one .webm input file and multiple .vtt files. A little bit of explanation:

  • The -i arguments pass input files. You can have multiple input files for one output file. They are numbered internally (this is necessary for the -map and -metadata options later), starting from 0.
  • The -map options take a "mapping". With them, you specify which input streams should go where in the output stream. By default, if you have multiple streams of the same type, ffmpeg will only pick one (the "best" one, whatever that is). The mappings we specify are:

    • -map 0:v: take the video stream from the first file (this is what happens by default if you do not specify any mapping at all; but once you do specify mappings, they need to be complete)
    • -map 0:a: take the audio stream from the first file as well (same as with the video).
    • -map 1:s: take the subtitle stream from the second (i.e., indexed 1) file.
    • -map 2:s: take the subtitle stream from the third (i.e., indexed 2) file.
  • The -metadata options set metadata on the output file. Here, we pass:

    • -metadata:s:a language=eng, to add a 's'tream metadata item on the 'a'udio stream, with name language and content eng. The language metadata in ffmpeg is special, in that it gets automatically translated to the correct way of specifying the language in the target container format.
    • -metadata:s:s:0 language=eng, to add a 's'tream metadata item on the first (indexed 0) 's'ubtitle stream in the output file. This, too, sets the language to English.
    • -metadata:s:s:1 language=nld, to add a 's'tream metadata item on the second (indexed 1) 's'ubtitle stream in the output file. This sets the language to Dutch.
  • The -c copy option tells ffmpeg to not transcode the input video data, but just to rewrite the container. This works because all input files (WebM video plus VTT subtitles) are valid for WebM. If you do not have an input subtitle format that is valid for WebM, you can instead limit the copy modifier to the video and audio only, allowing ffmpeg to transcode the subtitles. This is done by way of -c:v copy -c:a copy.
  • Finally, we pass -y to specify that any pre-existing output file should be overwritten, followed by the name of the output file.
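
If your subtitles do need transcoding, the variant mentioned above looks roughly like this (a sketch with hypothetical file names; here ffmpeg copies video and audio but is left free to convert the subtitle streams, e.g. SRT into WebVTT for a WebM container):

```shell
ffmpeg -i foo.webm -i foo.en.srt -i foo.nl.srt -map 0:v -map 0:a \
  -map 1:s -map 2:s -metadata:s:s:0 language=eng                 \
  -metadata:s:s:1 language=nld -c:v copy -c:a copy -y            \
  foo.subbed.webm
```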

Dirk Eddelbuettel: RcppArmadillo 0.8.300.1.0

7 December, 2017 - 07:59

Another RcppArmadillo release hit CRAN today. Since our last 0.8.100.1.0 release in October, Conrad kept busy and produced Armadillo releases 8.200.0, 8.200.1, 8.300.0 and now 8.300.1. We now tend to package these (with proper reverse-dependency checks and all) first for the RcppCore drat repo, from where you can install them "as usual" (see the repo page for details). This release, however, returns us to our normal bi-monthly CRAN release cycle.
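
Installing from the drat repo goes roughly like this (a sketch assuming the drat package is installed; see the repo page for the authoritative instructions):

```r
# Register the RcppCore drat repo, then install as usual (sketch).
drat::addRepo("RcppCore")
install.packages("RcppArmadillo")
```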

These releases iron out a few little nags left over from the recent switch to more extensive use of OpenMP, and round out a number of other corners. See below for a brief summary.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 405 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.300.1.0 (2017-12-04)
  • Upgraded to Armadillo release 8.300.1 (Tropical Shenanigans)

    • faster handling of band matrices by solve()

    • faster handling of band matrices by chol()

    • faster randg() when using OpenMP

    • added normpdf()

    • expanded .save() to allow appending new datasets to existing HDF5 files

  • Includes changes made in several earlier GitHub-only releases (versions 0.8.300.0.0, 0.8.200.2.0 and 0.8.200.1.0).

  • Conversion from simple_triplet_matrix is now supported (Serguei Sokol in #192).

  • Updated configure code to check for g++ 5.4 or later to enable OpenMP.

  • Updated the skeleton package to current packaging standards

  • Suppress warnings from Armadillo about missing OpenMP support and -fopenmp flags by setting ARMA_DONT_PRINT_OPENMP_WARNING

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Markus Koschany: My Free Software Activities in November 2017

7 December, 2017 - 01:33

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java
  • New upstream versions this month: undertow, jackrabbit, libpdfbox2, easymock, libokhttp-java, mediathekview, pdfsam, libsejda-java, libsambox-java and libnative-platform-java.
  • I updated bnd (2.4.1-7) in order to help with the removal of Eclipse from Testing. Unfortunately there is more work to do, and the only way forward is to package a newer version of Eclipse and to split the package in such a way that similar issues can be avoided in the future. P.S.: We appreciate help with maintaining Eclipse! (#681726)
  • I sponsored libimglib2-java for Ghislain Antony Vaillant.
  • I fixed a regression in libmetadata-extractor-java related to relative classpaths. (#880746)
  • I spent more time on upgrading Gradle to version 3.4.1 and finally succeeded. The package is in experimental now. Upgrading from 3.2.1 to 3.4.1 didn’t seem like a big undertaking, but the 8 MB debdiff and ~170,000 lines of code changes proved me wrong. I discovered two regressions with this version, in mockito and bnd. The former could be resolved, but bnd probably requires an upgrade as well. I would like to avoid that at the moment because major bnd upgrades tend to affect dozens of reverse-dependencies, mostly in a negative way.
  • Netbeans was affected by a regression in jaxb and failed to build from source. (#882525) I could partly revert the damage but another bug in jaxb 2.3.0 is currently preventing a complete recovery.
  • I fixed two Java 9 transition bugs in libnative-platform-java (#874645) and jedit (#875583).
Debian LTS

This was my twenty-first month as a paid contributor and I have been paid to work 14.75 hours (13, plus 1.75 carried over from October) on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1177-1. Issued a security update for poppler fixing 4 CVEs.
  • DLA-1178-1. Issued a security update for opensaml2 fixing 1 CVE.
  • DLA-1179-1. Issued a security update for shibboleth-sp2 fixing 1 CVE.
  • DLA-1180-1. Issued a security update for libspring-ldap-java fixing 1 CVE.
  • DLA-1184-1. Issued a security update for optipng fixing 1 CVE.
  • DLA-1185-1. Issued a security update for sam2p fixing 1 CVE.
  • DLA-1197-1. Issued a security update for sox fixing 7 CVEs.
  • DLA-1198-1. Issued a security update for libextractor fixing 6 CVEs. I also discovered that libextractor in buster/sid is still affected by more security issues and reported my findings as Debian bug #883528.
Misc
  • I packaged a new upstream release of osmo, a neat task manager and calendar application.
  • I prepared a security update for sam2p, which will be part of the next Jessie point release, and libspring-ldap-java. (DSA-4046-1)

Thanks for reading and see you next time.

Renata D'Avila: Creating a blog with pelican and Github pages

6 December, 2017 - 05:30

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and with creating a Python virtual environment to develop in. If you aren't, I recommend you learn with the Django Girls tutorial, which covers that and more.

This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).

The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.

I recommend that you initialize your repository with a README, with a .gitignore for Python and with a free software license. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.

Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:

$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git

And change to the new directory:

$ cd YOUR_USERNAME.github.io

Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":

$ git checkout -b source

Create the virtualenv with the Python3 version installed on your system.

On GNU/Linux systems, the command might go as:

$ python3 -m venv venv

or as

$ virtualenv --python=python3.5 venv

And activate it:

$ source venv/bin/activate

Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to Github) and Markdown (for writing your posts using markdown). It goes like this:

(venv)$ pip install pelican markdown ghp-import

Once that is done, you can start creating your blog using pelican-quickstart:

(venv)$ pelican-quickstart

This will prompt you with a series of questions. Before answering them, take a look at my answers below:

> Where do you want to create your new web site? [.] ./
> What will be the title of this web site? Renata's blog
> Who will be the author of this web site? Renata
> What will be the default language of this web site? [pt] en
> Do you want to specify a URL prefix? e.g., http://example.com   (Y/n) n
> Do you want to enable article pagination? (Y/n) y
> How many articles per page do you want? [10] 10
> What is your time zone? [Europe/Paris] America/Sao_Paulo
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
> Do you want to upload your website using FTP? (y/N) n
> Do you want to upload your website using SSH? (y/N) n
> Do you want to upload your website using Dropbox? (y/N) n
> Do you want to upload your website using S3? (y/N) n
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
> Do you want to upload your website using GitHub Pages? (y/N) y
> Is this your personal page (username.github.io)? (y/N) y
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io

About the time zone, it should be specified as a tz database time zone name (full list here: List of tz database time zones).

Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

Title: My first post
Date: 2017-11-26 10:01
Modified: 2017-11-27 12:30
Category: misc
Tags: first, misc
Slug: My-first-post
Authors: Your name
Summary: What does your post talk about? Write here.

This is the *first post* from my Pelican blog. **YAY!**

Let's see how it looks?

Go to the terminal, generate the static files and start the server. To do that, use the following command:

(venv)$ make html && make serve

While this command is running, you should be able to visit the site in your favorite web browser by typing localhost:8000 in the address bar.

Pretty neat, right?

Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

STATIC_PATHS = ['images']

Save it. Go to your post and add the image this way:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)

You can interrupt the server at any time by pressing Ctrl+C in the terminal. But you should start it again and check that the image is correct. Can you remember how?

(venv)$ make html && make serve

One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

FEED_ALL_ATOM = 'feeds/all.atom.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'
RSS_FEED_SUMMARY_ONLY = False

Save everything so you can send the code to Github. You can do that by adding all files, committing it with a message ('first commit') and using git push. You will be asked for your Github login and password.

$ git add -A && git commit -a -m 'first commit' && git push --all

And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:

$ make github

You will be asked for your Github login and password again. And... voilà! Your new blog should be live on https://YOUR_USERNAME.github.io.
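
Under the hood, make github roughly runs the following (a sketch; the exact recipe lives in the Makefile that pelican-quickstart generated for you, and varies by Pelican version):

```shell
pelican content -o output -s publishconf.py              # build the site for publishing
ghp-import -m "Generate Pelican site" -b master output   # commit output/ to the master branch
git push origin master                                   # publish to GitHub Pages
```

For a personal page the published branch is master, which is why we kept the source code on a separate "source" branch.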

If you had an error in any step of the way, please reread this tutorial and try to see in which part the problem happened, because that is the first step to debugging. Sometimes even something simple like a typo or, with Python, a wrong indentation can give us trouble. Shout out and ask for help online or in your community.

For tips on how to write your posts using Markdown, you should read the Daring Fireball Markdown guide.

To get other themes, I recommend you visit Pelican Themes.

This post was adapted from Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme. I hope it was somewhat useful for you.

Keith Packard: nuttx-scheme

6 December, 2017 - 05:25
Scheme For NuttX

Last fall, I built a tiny lisp interpreter for AltOS. That was fun, but had some constraints:

  • Yet another lisp-like language
  • Ran only on AltOS, not exactly a widely used operating system.

To fix the first problem, I decided to try and just implement scheme. The language I had implemented wasn't far off; it had lexical scoping and call-with-current-continuation after all. The rest is pretty simple stuff.

To fix the second problem, I ported the interpreter to NuttX. NuttX is a modest operating system for 8 to 32-bit microcontrollers with a growing community of developers.

I downloaded the most recent Scheme spec, the Revised⁷ Report, which is the 'small language' follow-on to the contentious Revised⁶ Report.

Converting ao-lisp to ao-scheme

Reading through the spec, it was clear there were a few things I needed to deal with to provide something that looked like scheme:

  • quasiquote
  • syntax-rules
  • function names
  • boolean type

Quasiquote turned out to be fun -- the spec described it in terms of a set of list forms, so I hacked up the reader to convert the convenient syntax using ` , and ,@ into lists and wrote a macro to emit construction code from the generated lists.
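
As an illustration of what the reader has to handle (standard Scheme behaviour, not code from this interpreter):

```scheme
(define x 2)
(define ys '(3 4))
`(1 ,x ,@ys)        ; ⇒ (1 2 3 4)
; the reader sees this as (quasiquote (1 (unquote x) (unquote-splicing ys)))
```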

Syntax-rules is a 'nicer' way to write macros, and I haven't implemented it yet. There's nothing it can do which the underlying full macros cannot, so I'm planning on just writing it in scheme rather than having a pile more C code.

Scheme has a separate boolean type: rather than using the empty list, (), for false, it uses #f, and everything else counts as true. Adding that wasn't hard, just tedious, as I had to work through every place that used boolean values and switch it to using #f or #t.

There were also a pile of random function name swaps and another bunch of new functions to write.

All in all, not a huge amount of work, and now the language looks a lot more like scheme.

Adding more number types

The original language had only integers, and those were only 14 bits wide. To make the language a bit more usable, I added 24 bit integers as well, along with 32-bit floats. Then I added automatic promotion between representations and the usual scheme tests for exactness. This added a bit to the footprint, and maybe I should make it optional.

Porting to NuttX

This was the easiest piece of the process -- NuttX offers a POSIX-like API, just like AltOS. Getting it to build was actually a piece of cake. The only part requiring new code was the lack of any command line editing or echo -- I ended up using readline to get that to work.

I was pleased that all of the language changes I made didn't significantly impact the footprint of the resulting system. I built NuttX for the stm32f4-discovery board, compiling in basic and then comparing with and without scheme:

Before:

$ size nuttx
   text    data     bss     dec     hex filename
 183037     172    4176  187385   2dbf9 nuttx

After:

$ size nuttx
   text    data     bss     dec     hex filename
 213197     188   22672  236057   39a19 nuttx

The increase in text includes 11kB of built-in lisp code, so that when the interpreter starts, you already have all of the necessary lisp code loaded that turns the bare interpreter into a full scheme system. That makes the core interpreter around 20kB of code, which is nice and compact (at least for scheme, I think).

The BSS space includes the heap; this can be set to any size you like. It would probably be good to have that allocated on the fly instead of used even when the interpreter isn't running.

Where's the Code

I've pushed the code here:

$ git clone git://keithp.com/git/apps
Future Work

There's more work to complete the language support; here's some tasks needing attention at some point:

  • No vectors or bytevectors
  • Characters are just numbers
  • No dynamic-wind or exceptions
  • No environments
  • No ports
  • No syntax-rules
  • No record types
  • No libraries
  • Heap allocated from BSS
A Sample Application!

Here's towers of hanoi in scheme for nuttx:

;
; Towers of Hanoi
;
; Copyright © 2016 Keith Packard <keithp@keithp.com>
;
; This program is free software; you can redistribute it and/or modify
; it under the terms of the GNU General Public License as published by
; the Free Software Foundation, either version 2 of the License, or
; (at your option) any later version.
;
; This program is distributed in the hope that it will be useful, but
; WITHOUT ANY WARRANTY; without even the implied warranty of
; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
; General Public License for more details.
;

                    ; ANSI control sequences

(define (move-to col row)
  (for-each display (list "\033[" row ";" col "H"))
  )

(define (clear)
  (display "\033[2J")
  )

(define (display-string x y str)
  (move-to x y)
  (display str)
  )

(define (make-piece num max)
                    ; A piece for position 'num'
                    ; is num + 1 + num stars
                    ; centered in a field of max *
                    ; 2 + 1 characters with spaces
                    ; on either side. This way,
                    ; every piece is the same
                    ; number of characters

  (define (chars n c)
    (if (zero? n) ""
      (+ c (chars (- n 1) c))
      )
    )
  (+ (chars (- max num 1) " ")
     (chars (+ (* num 2) 1) "*")
     (chars (- max num 1) " ")
     )
  )

(define (make-pieces max)
                    ; Make a list of numbers from 0 to max-1
  (define (nums cur max)
    (if (= cur max) ()
      (cons cur (nums (+ cur 1) max))
      )
    )
                    ; Create a list of pieces

  (map (lambda (x) (make-piece x max)) (nums 0 max))
  )

                    ; Here's all of the towers of pieces
                    ; This is generated when the program is run

(define towers ())

                    ; position of the bottom of
                    ; the stacks set at runtime
(define bottom-y 0)
(define left-x 0)

(define move-delay 25)

                    ; Display one tower, clearing any
                    ; space above it

(define (display-tower x y clear tower)
  (cond ((= 0 clear)
     (cond ((not (null? tower))
        (display-string x y (car tower))
        (display-tower x (+ y 1) 0 (cdr tower))
        )
           )
     )
    (else 
     (display-string x y "                    ")
     (display-tower x (+ y 1) (- clear 1) tower)
     )
    )
  )

                    ; Position of the top of the tower on the screen
                    ; Shorter towers start further down the screen

(define (tower-pos tower)
  (- bottom-y (length tower))
  )

                    ; Display all of the towers, spaced 20 columns apart

(define (display-towers x towers)
  (cond ((not (null? towers))
     (display-tower x 0 (tower-pos (car towers)) (car towers))
     (display-towers (+ x 20) (cdr towers)))
    )
  )

                    ; Display all of the towers, then move the cursor
                    ; out of the way and flush the output

(define (display-hanoi)
  (display-towers left-x towers)
  (move-to 1 23)
  (flush-output)
  (delay move-delay)
  )

                    ; Reset towers to the starting state, with
                    ; all of the pieces in the first tower and the
                    ; other two empty

(define (reset-towers len)
  (set! towers (list (make-pieces len) () ()))
  (set! bottom-y (+ len 3))
  )

                    ; Move a piece from the top of one tower
                    ; to the top of another

(define (move-piece from to)

                    ; references to the cons holding the two towers

  (define from-tower (list-tail towers from))
  (define to-tower (list-tail towers to))

                    ; stick the car of from-tower onto to-tower

  (set-car! to-tower (cons (caar from-tower) (car to-tower)))

                    ; remove the car of from-tower

  (set-car! from-tower (cdar from-tower))
  )

                    ; The implementation of the game

(define (_hanoi n from to use)
  (cond ((= 1 n)
     (move-piece from to)
     (display-hanoi)
     )
    (else
     (_hanoi (- n 1) from use to)
     (_hanoi 1 from to use)
     (_hanoi (- n 1) use to from)
     )
    )
  )

                    ; A pretty interface which
                    ; resets the state of the game,
                    ; clears the screen and runs
                    ; the program

(define (hanoi len)
  (reset-towers len)
  (clear)
  (display-hanoi)
  (_hanoi len 0 1 2)
  #t
  )

Jonathan Dowland: back on the Linux desktop

5 December, 2017 - 22:35

As forecast, I've switched from Mac back to Linux on the Desktop. I'm using a work-supplied Thinkpad T470s, which is a nice form-factor machine (the T450s was the first Thinkpad to widen my perspective away from just looking at the X series).

I've installed Debian to get started and ended up with GNOME 3 as the desktop (I was surprised not to be prompted for a choice in the installer, but on reflection that makes sense: I did a non-networked install from the GNOME flavour of the live DVD). So for the time being I'm going to stick with GNOME 3 and see what's new/better/worse than last time, but once my replacement SSD arrives I can revisit.

I haven't made much progress on the sticking points I identified in my last post. I'm hoping to get 1pass up and running in the interim to read my 1Password DB so I can get by until I've found a replacement password manager that I like.

Most of my desktop configuration steps I have captured in some Ansible playbooks. I'm looking at Ansible after a long break from using Puppet, and there are things I like and things I don't. I've also been exploring ownCloud for personal file sharing and, despite a couple of warning signs (urgh, PHP; the official Debian package was dropped), I'm finding it really useful, in particular for sharing stuff with family. I might write more about both of those later.

Joachim Breitner: Finding bugs in Haskell code by proving it

5 December, 2017 - 21:17

Last week, I wrote a small nifty tool called bisect-binary, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working?”. I wrote it in Haskell, and part of the Haskell code, in the Intervals.hs module, is a data structure for “subsets of a file”, represented as a sorted list of intervals:

data Interval = I { from :: Offset, to :: Offset }
newtype Intervals = Intervals [Interval]

The code is the kind of Haskell code that I like to write: a small local recursive function, a few guards for case analysis, and I am done:

intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1) (i2:is2)
        -- reorder for symmetry
        | to i1 < to i2 = go (i2:is2) (i1:is1)
        -- disjoint
        | from i1 >= to i2 = go (i1:is1) is2
        -- subset
        | to i1 == to i2 = I f' (to i2) : go is1 is2
        -- overlapping
        | otherwise = I f' (to i2) : go (i1 { from = to i2} : is1) is2
      where f' = max (from i1) (from i2)
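
To make the recursion easier to experiment with, here is a Python transcription of the go function (a sketch of my own, not part of the original code; an interval becomes a half-open (from, to) tuple, and the input lists are assumed sorted and disjoint):

```python
def intersect(is1, is2):
    """Intersect two sorted, disjoint lists of half-open (from, to) intervals."""
    if not is1 or not is2:
        return []
    (f1, t1), (f2, t2) = is1[0], is2[0]
    if t1 < t2:
        # reorder for symmetry
        return intersect(is2, is1)
    if f1 >= t2:
        # disjoint: the second head ends before the first begins
        return intersect(is1, is2[1:])
    f = max(f1, f2)
    if t1 == t2:
        # subset: both heads end together, emit the overlap and drop both
        return [(f, t2)] + intersect(is1[1:], is2[1:])
    # overlapping: emit the overlap and keep the remainder of the first head
    return [(f, t2)] + intersect([(t2, t1)] + is1[1:], is2[1:])
```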

But clearly, the code is already complicated enough that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in a proving mood...

Now available: Formal Verification for Haskell

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq.

We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there and put the right module name into the Makefile. I also commented out parts of the Haskell file that would drag in non-base dependencies.

Massaging the translation

Often, hs-to-coq translates Haskell code without a hitch, but sometimes, a bit of help is needed. In this case, I had to specify three so-called edits:

  • The Haskell code uses Intervals both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not for Coq. The line

    rename value Intervals.Intervals = ival

    changes the constructor name to ival.

  • I use the Int64 type in the Haskell code. The Coq version of Haskell’s base library that comes with hs-to-coq does not support that yet, so I change that via

    rename type GHC.Int.Int64 = GHC.Num.Int

    to the normal Int type, which itself is mapped to Coq’s Z type. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of Int64. Since none of my code does arithmetic, only comparisons, I am fine with that.

  • The biggest hurdle is the recursion of the local go functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the go above is not. For example, in the first case, the arguments to go are simply swapped. It is very much not obvious why this is not an infinite loop.

    I can specify a termination measure, i.e. a function that takes the arguments xs and ys and returns a “size” of type nat that decreases in every call: add the lengths of xs and ys, multiply by two, and add one if the first interval in xs ends before the first interval in ys.

    If the problematic function were a top-level function I could tell hs-to-coq about this termination measure and it would use this information to define the function using Program Fixpoint.

    Unfortunately, go is a local function, so this mechanism is not available to us. If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make go a top-level function, but in this case I did not want to change the Haskell code.

    Another way out offered by hs-to-coq is to translate the recursive function using an axiom unsafeFix : forall a, (a -> a) -> a. This looks scary, but as I explain in the previous blog post, this axiom can be used in a safe way.

    I should point out that it is my dissenting opinion to consider this a valid verification approach. The official stance of the hs-to-coq author team is that using unsafeFix in the verification can only be a temporary state, and eventually you’d be expected to fix (heh) this, for example by moving the functions to the top level and using hs-to-coq’s support for Program Fixpoint.

With these edits in place, hs-to-coq spits out a faithful Coq copy of my Haskell code.

Time to prove things

The rest of the work is mostly straightforward use of Coq. I define the invariant I expect to hold for these lists of intervals, namely that they are sorted, non-empty, disjoint and non-adjacent:

Fixpoint goodLIs (is : list Interval) (lb : Z) : Prop :=
  match is with
    | [] => True
    | (I f t :: is) => (lb <= f)%Z /\ (f < t)%Z /\ goodLIs is t
   end.

Definition good is := match is with
  ival is => exists n, goodLIs is n end.

and I give them a meaning using Ensemble, Coq’s type for sets:

Definition range (f t : Z) : Ensemble Z :=
  (fun z => (f <= z)%Z /\ (z < t)%Z).

Definition semI (i : Interval) : Ensemble Z :=
  match i with I f t => range f t end.

Fixpoint semLIs (is : list Interval) : Ensemble Z :=
  match is with
    | [] => Empty_set Z
    | (i :: is) => Union Z (semI i) (semLIs is)
end.

Definition sem is := match is with
  ival is => semLIs is end.
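As a quick sanity check of these definitions (my own toy examples, not part of the original development; they assume ZArith and Omega are loaded, and the exact tactic incantations may need tweaking):

Example goodLIs_example : goodLIs (I 0 2 :: I 5 7 :: nil) 0%Z.
Proof. simpl. intuition omega. Qed.

Example sem_example : In Z (semLIs (I 0 2 :: nil)) 1%Z.
Proof. left. unfold In, semI, range; simpl. split; omega. Qed.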

Now I prove for every function that it preserves the invariant and that it corresponds to the, well, corresponding function, e.g.:

Lemma intersect_good : forall (is1 is2 : Intervals),
  good is1 -> good is2 -> good (intersect is1 is2).
Proof. … Qed.

Lemma intersection_spec : forall (is1 is2 : Intervals),
  good is1 -> good is2 ->
  sem (intersect is1 is2) = Intersection Z (sem is1) (sem is2).
Proof. … Qed.

Even though I punted on the question of termination while defining the functions, I do not get around it while verifying them, so I formalize the termination argument from above:

Definition needs_reorder (is1 is2 : list Interval) : bool :=
  match is1, is2 with
    | (I f1 t1 :: _), (I f2 t2 :: _) => (t1 <? t2)%Z
    | _, _ => false
  end.

Definition size2 (is1 is2 : list Interval) : nat :=
  (if needs_reorder is1 is2 then 1 else 0) + 2 * length is1 + 2 * length is2.

and use it in my inductive proofs.
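The crucial property of this measure, stated here as a lemma of my own (it is not part of the original development), is that it decreases when go swaps its arguments:

(* If needs_reorder is1 is2 holds then t1 < t2, so needs_reorder is2 is1
   does not hold: the leading 1 of the measure disappears while the two
   length terms are unchanged.  Proof by case analysis on both lists. *)
Lemma size2_swap : forall is1 is2,
  needs_reorder is1 is2 = true ->
  size2 is2 is1 < size2 is1 is2.
Proof. … Qed.

The factor of two in size2 is what pays for the other cases: when go instead consumes an interval, the length terms drop by two, which more than compensates for the reorder bit possibly flipping back to one.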

As I intend this to be a write-once proof, I happily copy’n’pasted proof scripts and did not do any cleanup. Thus, the resulting Proof file is big, ugly and repetitive. I am confident that judicious use of Coq tactics could greatly condense this proof.

Using Program Fixpoint after the fact?

These proofs are also an experiment in how to do induction over a locally defined recursive function without overly ugly proof goals (hence the line match goal with [ |- context [unsafeFix ?f _ _] ] => set (u := f) end.). One could improve upon this approach by following these steps:

  1. Define copies (say, intersect_go_witness) of the local go using Program Fixpoint with the above termination measure. The termination argument needs to be made only once, here.

  2. Use this function to prove that the argument f in go = unsafeFix f actually has a fixed point:

    Lemma intersect_go_sound :
      f intersect_go_witness = intersect_go_witness

    (This requires functional extensionality.) This lemma indicates that my use of the axioms unsafeFix and unsafeFix_eq is actually sound, as discussed in the previous blog post.

  3. Still prove the desired properties for the go that uses unsafeFix, as before, but using the functional induction scheme for intersect_go! This way, the actual proofs are free from any noisy termination arguments.

    (The trick to define a recursive function just to throw away the function and only use its induction rule is one I learned in Isabelle, and is very useful to separate the meat from the red tape in complex proofs. Note that the induction rule for a function does not actually mention the function!)

Maybe I will get to this later.

Update: I experimented a bit in that direction, and it does not quite work as expected. In step 2 I am stuck because Program Fixpoint does not create a fixpoint-unrolling lemma, and in step 3 I do not get the induction scheme that I was hoping for. Both problems would not exist if I used the Function command, although that needs some trickery to support a termination measure on multiple arguments. The induction lemma is not quite as polished as I was hoping for, so the resulting proof is still somewhat ugly, and it requires copying code, which does not scale well.

Efforts and gains

I spent exactly 7 hours working on these proofs, according to arbtt. I am sure that writing these functions took me much less time, but I cannot calculate that easily, as they were originally in the Main.hs file of bisect-binary.

I did find and fix three bugs:

  • The intersect function would not always retain the invariant that the intervals would be non-empty.
  • The subtract function would prematurely advance through the list of intervals in the second argument, which could lead to a genuinely wrong result. (This occurred twice.)

Conclusion: Verification of Haskell code using Coq is now practically possible!

Final rant: Why is the Coq standard library so incomplete (compared to, say, Isabelle’s), requiring me to prove so many lemmas about basic functions on Ensembles?

Reproducible builds folks: Reproducible Builds: Weekly report #136

5 December, 2017 - 21:10

Here's what happened in the Reproducible Builds effort between Sunday, November 26 and Saturday, December 2, 2017:

Media coverage

Arch Linux imap key leakage

A security issue was found in the imap package in Arch Linux thanks to the reproducible builds effort in that distribution.

Due to a hardcoded key-generation routine in the build() step of imap's PKGBUILD (the standard packaging file for Arch Linux packages), a default secret key was generated and leaked on all imap installations. This was promptly reviewed, confirmed and fixed by the package maintainers.

This mirrors similar security issues found in Debian, such as #833885.

Debian packages reviewed and fixed, and bugs filed

In addition, 73 FTBFS bugs were detected and reported by Adrian Bunk.

Reviews of unreproducible Debian packages

83 package reviews have been added, 41 have been updated and 33 have been removed this week, adding to our knowledge about identified issues.

1 issue type was updated:

LEDE / OpenWrt packages updates

diffoscope development

reprotest development

Version 0.7.4 was uploaded to unstable by Ximin Luo. It included contributions already covered by posts of the previous weeks as well as new ones from:

reproducible-website development

tests.reproducible-builds.org

Misc.

This week's edition was written by Alexander Couzens, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Santiago Torres-Arias, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Petter Reinholdtsen: Is the short movie «Empty Socks» from 1927 in the public domain or not?

5 December, 2017 - 18:30

Three years ago, a presumed lost animation film, Empty Socks from 1927, was discovered in the Norwegian National Library. At the time it was discovered, it was generally assumed to be copyrighted by The Walt Disney Company, and I blogged about my reasoning to conclude that it would enter the Norwegian equivalent of the public domain in 2053, based on my understanding of Norwegian Copyright Law. But a few days ago, I came across a blog post claiming the movie was already in the public domain, at least in the USA. The reasoning is as follows: the film was released in November or December 1927 (sources disagree), and its copyright was presumably registered that year. At that time, rights holders of movies registered with the copyright office received government protection for their work for 28 years. After 28 years, the copyright had to be renewed if they wanted the government to protect it further. The blog post I found claims such a renewal did not happen for this movie, and thus it entered the public domain in 1956. Yet others claim the copyright was renewed and the movie is still copyright protected. Can anyone help me figure out which claim is correct? I have not been able to find Empty Socks in Catalog of copyright entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures available from the University of Pennsylvania, neither on page 45 for the first half of 1955, nor on page 119 for the second half of 1955. It is of course possible that the renewal entry was left out of the printed catalog by mistake. Is there some way to rule out this possibility? Please help, and update the Wikipedia page with your findings.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Thomas Lange: FAI.me build server improvements

5 December, 2017 - 03:59

Only one week ago, I announced the FAI.me build service for creating your own installation images. I've got some feedback, and people would like to have root login without a password but with an ssh key. This feature is now available. You can upload your public ssh key, which will be installed as authorized_keys for the root account.

You can now also download the configuration space that is used on the installation image, and you can get the whole log file from the fai-mirror call. This command creates the partial package mirror. The log file helps you debug things if you add packages which conflict with other packages, or if you misspelled a package name.

FAI.me

Joey Hess: new old thing

5 December, 2017 - 03:50

This branch came from a cedar tree overhanging my driveway.

It was fun to bust this open and shape it with hammer and chisels. My dad once recommended learning to chisel before learning any power tools for woodworking… so I suppose this is a start.

Some tung oil and drilling later, and I'm very pleased to have a nice place to hang my cast iron.

Holger Levsen: 20171204-qubes-mirage-firewall

4 December, 2017 - 21:37
On using QubesOS MirageOS firewall

So I'm lucky to attend the 4th MirageOS hack retreat in Marrakesh this week, where I learned to build and use qubes-mirage-firewall, a MirageOS-based (system) firewall for Qubes OS. The main visible effect is that this unikernel only needs 32 megabytes of memory, while a Debian (or Fedora) based firewall system needs half a gigabyte. It's also said to be more secure, but I have not verified that myself.

In the spirit of avoiding overhead I decided not to build with docker as the qubes-mirage-firewall's README.md suggests, but rather use a base Debian stretch system. Here's how to build natively:

sudo apt install git opam aspcud curl debianutils m4 ncurses-dev perl pkg-config time

git clone https://github.com/talex5/qubes-mirage-firewall
cd qubes-mirage-firewall/
opam init
# the next line is super useful if there is bad internet connectivity but you happen to have access to a local mirror
# opam repo add local http://10.0.0.2:8080
opam switch 4.04.2
eval `opam config env`
## in there:
opam install -y vchan xen-gnt mirage-xen-ocaml mirage-xen-minios io-page mirage-xen mirage mirage-nat mirage-qubes netchannel
mirage configure -t xen
make depend
make tar

Then follow the instructions in the README.md and switch some AppVMs to it; if you are happy with the results, make it the default and shut down the old firewall. Currently I'm not sure I am happy, because it doesn't allow updating template VMs...

Update: qubes-mirage-firewall allows this. Just the crashed qubes-updates-proxy service in sys-net prevented it, but that's another bug elsewhere.

I also learned that it builds reproducibly given the same build path and ignoring the issue of timestamps in the generated tarball; IOW, the unikernel (and the 3 other files) inside the tarball are reproducible. I still need to compare a docker build with a build done the above way, and I really don't like having to edit the firewall's rules.ml file and then rebuild it. More on this in another post later, hopefully.

Oh, I didn't mention it and won't say more here, but this hack retreat and its organisation are marvellous! Many thanks to everyone here!

Raphaël Hertzog: My Free Software Activities in November 2017

4 December, 2017 - 00:52

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10h. During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVE (16 commits to the security tracker).

I prepared and released DLA-1171-1 on libxml-libxml-perl.

I prepared a new update for simplesamlphp (1.9.2-1+deb7u1) fixing 6 CVE. I have not released a DLA yet since I was not able to test the updated package. I'm hoping that the current maintainer can do it, since he wanted to work on the update a few months ago.

Distro Tracker

Distro Tracker has seen a high level of activity in the last month. Ville Skyttä continued to contribute a few patches, he helped notably to get rid of the last blocker for a switch to Python 3.

I then worked with DSA to get the production instance (tracker.debian.org) upgraded to stretch with Python 3.5 and Django 1.11. This resulted in a few regressions related to the Python 3 switch (despite the large number of unit tests) that I had to fix.

In parallel, Pierre-Elliott Bécue showed up on the debian-qa mailing list and started to contribute. I have been exchanging with him almost daily on IRC to help him improve his patches. He has been very responsive and I’m looking forward to continuing to cooperate with him. His first patch enabled the use of "src:" and "bin:" prefixes in the search feature to specify whether we want to look up source packages or binary packages.

I did some cleanup/refactoring work after the switch of the codebase to Python 3 only.

Misc Debian work

Sponsorship. I sponsored many new packages: python-envparse 0.2.0-1, python-exotel 0.1.5-1, python-aws-requests-auth 0.4.1-1, pystaticconfiguration 0.10.3-1, python-jira 1.0.10-1, python-twilio 6.8.2-1, python-stomp 4.1.19-1. All those are dependencies for elastalert 0.1.21-1 that I also sponsored.

I sponsored updates for vboot-utils 0~R63-10032.B-2 (new upstream release for openssl 1.1 compat), aircrack-ng 1:1.2-0~rc4-4 (introducing airgraph-ng package) and asciidoc 8.6.10-2 (last upstream release, tool is deprecated).

Debian Installer. I submitted a few patches a while ago to support finding ISO images in LVM logical volumes in the hd-media installation method. Colin Watson reviewed them and made a few suggestions and expressed a few concerns. I improved my patches to take into account his suggestions and I resolved all the problems he pointed out. I then committed everything to the respective git repositories (for details review #868848, #868859, #868900, #868852).

Live Build. I merged 3 patches for live-build (#879169, #881941, #878430).

Misc. I uploaded Django 1.11.7 to stretch-backports. I filed an upstream bug on zim for #881464.

Thanks

See you next month for a new summary of my activities.


Noah Meyerhans: On the demise of Linux Journal

3 December, 2017 - 09:54

LWN, Slashdot, and many others have marked the recent announcement of Linux Journal's demise. I'll take this opportunity to share some of my thoughts, and to thank the publication and its many contributors for their work over the years.

I think it's probably hard for younger people to imagine what the Linux world was like 20 years ago. Today, it's really not an exaggeration to say that the Internet as we know it wouldn't exist at all without Linux. Almost every major Internet company you can think of runs almost completely on Linux. Amazon, Google, Facebook, Twitter, etc, etc. All Linux. In 1997, though, the idea of running a production workload on Linux was pretty far out there.

I was in college in the late 90's, and worked for a time at a small Cambridge, Massachusetts software company. The company wrote a pretty fancy (and expensive!) GUI builder targeting big expensive commercial UNIX platforms like Solaris, HP/UX, SGI IRIX, and others. At one point a customer inquired about the availability of our software on Linux, and I, as an enthusiastic young student, got really excited about the idea. The company really had no plans to support Linux, though. I'll never forget the look of disbelief on a company exec's face as he asked "$3000 on a Linux system?"

Throughout this period, on my lunch breaks from work, I'd swing by the now defunct Quantum Books. One of the monthly treats was a new issue of Linux Journal on the periodicals shelf. In these issues, I learned that more forward thinking companies actually were using Linux to do real work. An article entitled "Linux Sinks the Titanic" described how Hollywood deployed hundreds(!) of Linux systems running custom software to generate the special effects for the 1997 movie Titanic. Other articles documented how Linux was making inroads at NASA and in the broader scientific community. Even the ads were interesting, as they showed increasing commercial interest in Linux, both on the hardware (HyperMicro, VA Research, Linux Hardware Solutions, etc) and software (CDE, Xi Graphics) fronts.

The software world is very different now than it was in 1997. The media world is different, too. Not only is Linux well established, it's pretty much the dominant OS on the planet. When Linux Journal reported in the late 90's that Linux was being used for some new project, that was news. When they documented how to set up a Linux system to control some new piece of hardware or run some network service, you could bet that they filled a gap that nobody else was working on. Today, it's no longer news that a successful company is using Linux in production. Nor is it surprising that you can run Linux on a small embedded system; in fact it's quite likely that the system shipped with Linux pre-installed. On the media side, it used to be valuable to have everything bundled in a glossy, professionally produced archive published on a regular basis. Today, at least in the Linux/free software sphere, that's less important. Individual publication is easy on the Internet today, and search engines are very good at ensuring that the best content is the most discoverable content. The whole Internet is basically one giant continuously published magazine.

It's been a long time since I paid attention to Linux Journal, so from a practical point of view I can't honestly say that I'll miss it. I appreciate the role it played in my growth, but there are so many options for young people today entering the Linux/free software communities that it appears that the role is no longer needed. Still, the termination of this magazine is a permanent thing, and I can't help but worry that there's somebody out there who might thrive in the free software community if only they had the right door open before them.

Thomas Goirand: There’s cloud, and it can even be YOURS on YOUR computer

3 December, 2017 - 05:00

Each time I see the FSFE picture, just like on Daniel’s last post to planet.d.o, where it says:

“There is NO CLOUD, just other people’s computers”

it makes me so frustrated. There's such a thing as a private cloud, set up on your own set of servers. I've been working on delivering OpenStack to Debian for the last six and a half years, motivated exactly by this issue: I refuse to accept that the only cloud people could use would be a closed source solution like GCE, AWS or Azure. The FSFE (and the FSF) completely dismissing this work is more than annoying: it is counterproductive. Not only should the FSFE not pull anyone away from the cloud, it should push for the public to choose cloud providers using free software like OpenStack.

The openstack.org marketplace lists 23 public cloud providers using OpenStack, so there is now no excuse to use any other type of cloud: for sure, there's one where you need it. If you use a free software solution like OpenStack, then the question of whether you're running on your own hardware, on some rented hardware (on which you deployed OpenStack yourself), or on someone else's OpenStack deployment is just a practical one, on which you can always back out quickly. That's one of the very reasons why one should deploy on the cloud: so that it's possible to redeploy quickly on another cloud provider, or even on your own private cloud. This gives you more freedom than you ever had, because it makes you no longer dependent on the hosting company you've selected: switching providers is just the matter of launching a script. The reality is that neither the FSFE nor RMS understands all of this. Please don't dive into the FSFE's very wrong message.

Steve Kemp: BlogSpam.net repository cleanup, and email-changes.

3 December, 2017 - 05:00

I've shuffled around all the repositories which are associated with the blogspam service, such that they're all in the same place and refer to each other correctly:

Otherwise I've done a bit of tidying up on virtual machines, and I'm just about to drop the use of qpsmtpd for handling my email. I've used the (perl-based) qpsmtpd project for many years, and documented how my system works in a "book":

I'll be switching to a pure exim4-based setup later today, and we'll see what that does. So far today I've received over five thousand spam emails:

  steve@ssh /spam/today $ find . -type f | wc -l
  5731

Looking more closely though over half of these rejections are "dictionary attacks", so they're not SPAM I'd see if I dropped the qpsmtpd-layer. Here's a sample log entry (for a mail that was both rejected at SMTP-time by qpsmtpd and archived to disc in case of error):

   {"from":"<clzzgiadqb@ics.uci.edu>",
    "helo":"adrian-monk-v3.ics.uci.edu",
    "reason":"Mail for juha not accepted at steve.fi",
    "filename":"1512284907.P26574M119173Q0.ssh.steve.org.uk.steve.fi",
    "subject":"Viagra Professional. Beyond compare. Buy at our shop.",
    "ip":"2a00:6d40:60:814e::1",
    "message-id":"<p65NxDXNOo1b.cdD3s73osVDDQ@ics.uci.edu>",
    "recipient":"juha@steve.fi",
    "host":"Unknown"}

I suspect that with procmail piping to crm114, and a beefed-up spam-checking configuration for exim4, I'll not see a significant difference, and I'll have removed something non-standard. For what it is worth, over 75% of the remaining junk which was rejected at SMTP-time was rejected via DNS blacklists. So again, exim4 will take care of that for me.

If it turns out that I'm getting inundated with junk-mail I'll revert this, but I suspect that it'll all be fine.

Thorsten Alteholz: My Debian Activities in November 2017

2 December, 2017 - 23:55

FTP master

As you might have read elsewhere, I am no longer an FTP assistant. I am very delighted about my new delegation as FTP master.

So this month I almost doubled the number of accepted packages to 385, and rejected 60 uploads. The overall number of packages that got accepted this month was 448.

Debian LTS

This was my forty-first month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 13h. During that time I did LTS uploads of:

  • [DLA 1188-1] libxml2 security update (one CVE)
  • [DLA 1191-1] python-werkzeug security update (one CVE)
  • [DLA 1192-1] libofx security update (two CVEs)
  • [DLA 1195-1] curl security update (one CVE)
  • [DLA 1194-1] libxml2 security update (two CVEs)

I also took care of an rsync issue and continued to work on wireshark.

Other stuff

During November I uploaded new upstream versions of …

I also did uploads of …

  • openoverlayrouter to change the source package Section: and fix some problems in Ubuntu
  • duktape to not only provide a shared library but also a pkg-config file
  • astronomical-almanac to make Helmut happy and fix a FTCBFS where he also provided the patch

Last month I wrote about apcupsd as the DOPOM of October. Unfortunately, in November came the next power outage, due to a malfunction in a transformer station. I never would have guessed that such a malfunction could do so much harm within the power grid. Anyway, the power was back after 31 minutes, and my batteries would have lasted 34 minutes before turning off all computers. At least my spec was correct :-).

The DOPOM for this month has been dateutils.

As it is again this time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. Like in past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug will be closed. Don't hesitate, start to squash :-).

Last but not least I sponsored the upload of evqueue-core.

Martin-Éric Racine: dbus, rsyslogd, systemd: Which one is the culprit?

2 December, 2017 - 22:03
I have been facing this issue for a few weeks on testing. For many weeks, it prevented upgrading dbus to the latest version that trickled into testing. Having manually force-installed dbus via the Recovery Mode's shell, I then ran into this issue:


This is a nasty one, since it also prevents performing a clean poweroff. That systemd-journald line about getting a timeout while attempting to connect to the Watchdog keeps showing up ad infinitum.

What am I missing?

Daniel Pocock: Hacking with posters and stickers

2 December, 2017 - 03:27

The FIXME.ch hackerspace in Lausanne, Switzerland has started this weekend's VR Hackathon with a somewhat low-tech 2D hack: using the FSFE's Public Money Public Code stickers in lieu of sticky tape to place the NO CLOUD poster behind the bar.

Get your free stickers and posters

FSFE can send you these posters and stickers too.


Creative Commons License The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.