Planet Debian


Vincent Fourmond: Release 2.2 of QSoas

9 hours 16 min ago
The new release of QSoas is finally ready! It brings in a lot of new features and improvements, notably greatly improved memory use for massive multifits, a fit for linear (in)activation processes (the one we used in Fourmond et al, Nature Chemistry 2014), a new way to transform "numbers" like peak position or stats into new datasets, and even SVG output! Following popular demand, it also finally brings back the peak area output in the find-peaks command (and the other, related commands)! You can browse the full list of changes there.

The new release can be downloaded from the downloads page.
Freely available binary images for QSoas 1.0
In addition to the new release, we are now releasing the binary images for MacOS and Windows for the release 1.0. They are also freely available for download from the downloads page.
About QSoas
QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

Julien Danjou: On blog migration

11 hours 16 min ago

I started my first Web page in 1998 and one could say that it has evolved quite a bit in the meantime. From a FrontPage-designed Web site with frames, it evolved to plain HTML files. I started blogging in 2003, though the archives of this blog only go back to 2007. Truth is, many things I wrote in the first years were short (there was no Twitter) and not that relevant nowadays. Therefore, I never migrated them along the road of the many migrations that the site went through.

The last time I switched this site's engine was in 2011, when I switched from Emacs Muse (and my custom muse-blog.el extension) to Hyde, a static Web site generator written in Python.

That taught me a few things.

First, you can't really know for sure which project will be a ghost in 5 years. I had no clue back then that Hyde's author would lose interest and struggle to pass the maintainership to someone else. The community was not big, but it existed. Betting on a horse is part skill and part chance. My skills were probably lower seven years ago, and I may also have had bad luck.

Secondly, maintaining a Web site is painful. I used to blog more regularly a few years ago, when the friction of using a dynamic blog engine was lower than spawning my deprecated static engine. Knowing that it takes 2 minutes to generate the static Web site really makes it difficult to compose and see the result at the same time without losing patience. It took me a few years to decide it was time to invest in the migration. I just jumped from Hyde to Ghost, hosted on their Pro engine as I don't want to do any maintenance. Let's be honest, I've no will to inflict on myself the maintenance of a JavaScript blogging engine.

The positive side is that this is still Markdown based, so the migration job was not so painful. Ghost offers a REST API which allows manipulating most of the content. It works fine, and I was able to leverage the Python ghost-client to write a tiny migration script to migrate every post.

I am looking forward to sharing most of the things that I work on during the next months. I have really enjoyed reading the writings of great hackers these last years, and I learnt a ton of things by reading the adventures of smarter engineers.

It might be my time to share.

Jeremy Bicha: gksu is dead. Long live PolicyKit

12 hours 33 min ago

Today, gksu was removed from Debian unstable. It was already removed 2 months ago from Debian Testing (which will eventually be released as Debian 10 “Buster”).

It’s not been decided yet if gksu will be removed from Ubuntu 18.04 LTS. There is one blocker bug there.


Petter Reinholdtsen: Facebook's ability to sell your personal information is the real Cambridge Analytica scandal

21 March, 2018 - 22:30

So, Cambridge Analytica is getting some well deserved criticism for (mis)using information it got from Facebook about 50 million people, mostly in the USA. What I find a bit surprising is how little criticism Facebook is getting for handing the information over to Cambridge Analytica and others in the first place. And what about the people handing their private and personal information to Facebook? And last, but not least, what about the government offices who are handing information about the visitors of their web pages to Facebook? No-one who looked at the terms of use of Facebook should be surprised that information about people's interests, political views, personal lives and whereabouts would be sold by Facebook.

What I find to be the real scandal is the fact that Facebook is selling your personal information, not that one of the buyers used it in a way Facebook did not approve of when exposed. It is well known that Facebook is selling out their users' privacy, but it is a scandal nevertheless. Of course the information provided to them by Facebook would be misused by one of the parties given access to personal information about the millions of Facebook users. Collected information will be misused sooner or later. The only way to avoid such misuse is to not collect the information in the first place. If you do not want Facebook to hand out information about yourself for the use and misuse of its customers, do not give Facebook the information.

Personally, I would recommend completely removing your Facebook account, and taking back some control of your personal information. According to The Guardian, it is a bit hard to find out how to request account removal (and not just 'disabling'). You need to visit a specific Facebook page and click on 'let us know' on that page to get to the real account deletion screen. Perhaps something to consider? I would not trust the information to really be deleted (who knows, perhaps NSA, GCHQ and FRA already got a copy), but it might reduce the exposure a bit.

If you want to learn more about the capabilities of Cambridge Analytica, I recommend watching the video recording of the one-hour talk Paul-Olivier Dehaye gave to NUUG last April about data collection, psychometric profiling and their impact on politics.

And if you want to communicate with your friends and loved ones, use some end-to-end encrypted method like Signal or Ring, and stop sharing your private messages with strangers like Facebook and Google.

Iustin Pop: Hakyll basics

21 March, 2018 - 09:00

As part of my migration to Hakyll, I had to spend quite a bit of time understanding how it works before I became somewhat “at-home” with it. There are many posts that show “how to do x”, but not so many that explain its inner workings. Let me try to fix that: at its core, Hakyll is nothing else than a combination of make and m4, all in one. Simple, right? Let’s see :)

Note: in the following, basic proficiency with Haskell is assumed.

Monads and data types

Rules

The first area (the make equivalent), more precisely the Rules monad, concerns itself with the rules for mapping source files into output files, or creating output files from scratch.

Key to this mapping is the concept of an Identifier, which is a name in an abstract namespace. Most of the time—e.g. for all the examples in the upstream Hakyll tutorial—this identifier actually maps to a real source file, but this is not required; you can create an identifier from any string value.

The similarity, or relation, to file paths manifests in two ways:

  • the Identifier data type, although opaque, is internally implemented as a simple data type consisting of a file path and a “version”; the file path here points to the source file (if any), while the version is rather a variant of the item (not a numeric version!).
  • if the identifier has been included in a rule, it will have an output file (in the Compiler monad, via getRoute).

In effect, the Rules monad is all about taking source files (as identifiers) or creating them from scratch, and mapping them to output locations, while also declaring how to transform—or create—the contents of the source into the output (more on this later). Anyone can create an identifier value via fromFilePath, but “registering” them into the rules monad is done by one of:

  • matching input files, via match, or matchMetadata, which takes a Pattern that describes, well, matching source files (on the file-system):

    match :: Pattern -> Rules () -> Rules ()
  • creating an abstract identifier, via create:

    create :: [Identifier] -> Rules () -> Rules ()

Note: I’m probably misusing the term “registered” here. It’s not the specific value that is registered, but the identifier’s file path. Once this string value has been registered, one can use a different identifier value with a similar string (value) in various function calls.

Note: whether we use match or create doesn’t matter; only the actual values matter. So a match on a given file path is equivalent to a create with that same path; match here takes the list of identifiers from the file-system, but does not associate them with the files themselves—it’s just a way to get the list of strings.

The second argument to the match/create calls is another rules monad, in which we process the identifiers and declare how to transform them.

This transformation has, as described, two aspects: how to map the file path to an output path, via the Routes data type, and how to compile the body, in the Compiler monad.

Name mapping

The name mapping starts with the route call, which lifts the routes into the rules monad.

The routing has the usual expected functionality:

  • idRoute :: Routes, which maps 1:1 the input file name to the output one.
  • setExtension :: String -> Routes, which changes the extension of the filename, or sets it (if there wasn’t any).
  • constRoute :: FilePath -> Routes, which is special in that it will result in the same output filename, which is obviously useful only for rules matching a single identifier.
  • and a few more options, like building the route based on the identifier (customRoute), building it based on metadata associated to the identifier (metadataRoute), composing routes, match-and-replace, etc.

All in all, routes offer all the needed functionality for mapping.
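For instance, routes can be composed; a minimal sketch (the posts/*.md pattern and the blog/ prefix are mine, purely illustrative, not from the original post):

-- route posts/*.md to blog/*.html: rewrite the directory part,
-- then change the extension
match "posts/*.md" $ do
  route $ gsubRoute "posts/" (const "blog/") `composeRoutes`
            setExtension "html"
  compile getResourceBody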

Note that how we declare the input identifier and how we compute the output route is irrelevant; what matters is the actual values. So for an identifier with a given file path, route idRoute is equivalent to constRoute with that same path.


Compilers

Slightly into more interesting territory here, as we’re moving beyond just file paths :) Lifting a compiler into the rules monad is done via the compile function:

compile :: (Binary a, Typeable a, Writable a) => Compiler (Item a) -> Rules ()

The Compiler monad result is an Item a, which is just an identifier with a body (of type a). This type variable a means we can return any Writable item. Many of the compiler functions work with/return String, but the flexibility to use other types is there.

The functionality in this module revolves around four topics:

The current identifier

First the very straightforward functions for the identifier itself:

  • getUnderlying :: Compiler Identifier, just returns the identifier
  • getUnderlyingExtension :: Compiler String, returns the extension

And then for the body (data) of the identifier (mostly copied from the haddock of the module):

  • getResourceBody :: Compiler (Item String): returns the full contents of the matched source file as a string, but without metadata preamble, if there was one.
  • getResourceString :: Compiler (Item String), returns the full contents of the matched source file as a string.
  • getResourceLBS :: Compiler (Item ByteString), equivalent to the above but as lazy bytestring.
  • getResourceFilePath :: Compiler FilePath, returns the file path of the resource we are compiling.

More or less, these return the data to enable doing arbitrary things to it, and are the cornerstone of a static site compiler. One could implement a simple “copy” compiler by doing just:

match "*.html" $ do
  -- route to the same path, per earlier explanation.
  route idRoute
  -- the compiler just returns the body of the source file.
  compile getResourceLBS

All the other functions in the module work on arbitrary identifiers.


Routes

I’m used to Yesod and its safe routes functionality. Hakyll has something slightly weaker, but with programmer discipline it can allow similar levels of “I know this will point to the right thing” (and maybe correct escaping as well). Enter the:

getRoute :: Identifier -> Compiler (Maybe FilePath)

function which I alluded to earlier, and which—either for the current identifier or another identifier—returns the destination file path, which is useful for composing links (as in HTML links) to it.

For example, instead of hard-coding the path to the archive page, as /archive.html, one can instead do the following:

let archiveId = "archive.html"

create [archiveId] $ do
  -- build here the archive page

-- later in the index page
create ["index.html"] $ do
  compile $ do
    -- compute the actual url; getRoute returns a Maybe, so we
    -- need a fallback for identifiers without a route:
    archiveUrl <- maybe "#" toUrl <$> getRoute archiveId
    -- then use it in the creation of the index.html page

The reuse of archiveId above ensures that if the actual path to the archive page changes (renames, site reorganisation, etc.), then all the links to it (assuming, again, discipline of not hard-coding them) are automatically pointing to the right place.

Working with other identifiers

Getting to the interesting aspect now. In the compiler monad, one can ask for any other identifier, whether it was already loaded/compiled or not—the monad takes care of tracking dependencies/compiling automatically/etc.

There are two main functions:

  • load :: (Binary a, Typeable a) => Identifier -> Compiler (Item a), which returns a single item, and
  • loadAll :: (Binary a, Typeable a) => Pattern -> Compiler [Item a], which return a list of items, based on the same patterns used in the rules monad.

If the identifier/pattern requested does not match actual identifiers declared in the “parent” rules monad, then these calls will fail (as in monadic fail).

The use of other identifiers in a compiler step is what allows moving beyond “input file to output file”; aggregating a list of pages (e.g. blog posts) into a single archive page is the most obvious example.
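As a minimal sketch of such an archive (assuming posts are compiled as String items under a posts/* pattern; the pattern and markup are mine, not from the original post):

-- aggregate all posts into a bare-bones HTML list;
-- forM comes from Control.Monad
create ["archive.html"] $ do
  route idRoute
  compile $ do
    posts <- loadAll "posts/*" :: Compiler [Item String]
    links <- forM posts $ \post -> do
      -- ask for the output location of each post
      mbRoute <- getRoute (itemIdentifier post)
      return $ case mbRoute of
        Nothing -> ""
        Just r  -> "<li><a href=\"" ++ toUrl r ++ "\">"
                   ++ toFilePath (itemIdentifier post) ++ "</a></li>"
    makeItem ("<ul>" ++ concat links ++ "</ul>")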

But sometimes getting just the final result of the compilation step (of other identifiers) is not flexible enough—in case of HTML output, this includes the entire page, including the <html><head>…</head> part, not only the body we might be interested in. So, to ease any aggregation, one uses snapshots.


Snapshots

Snapshots allow, well, snapshotting the intermediate result under a specific name, to allow later retrieval:

  • saveSnapshot :: (Binary a, Typeable a) => Snapshot -> Item a -> Compiler (Item a), to save a snapshot
  • loadSnapshot :: (Binary a, Typeable a) => Identifier -> Snapshot -> Compiler (Item a), to load a snapshot, similar to load
  • loadAllSnapshots :: (Binary a, Typeable a) => Pattern -> Snapshot -> Compiler [Item a], similar to loadAll

One can save an arbitrary number of snapshots at various steps of the compilation, and then re-use them.
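A small sketch of how this fits together (the snapshot name "content" is my choice, not mandated by Hakyll):

match "posts/*" $ do
  route $ setExtension "html"
  compile $ getResourceBody
    >>= saveSnapshot "content"
    -- ... further processing into the full page would follow here ...

-- and elsewhere, aggregate only the saved bodies:
-- posts <- loadAllSnapshots "posts/*" "content"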

Note: load and loadAll are actually just the snapshot variant, with a hard-coded value for the snapshot. As I write this, the value is "_final", so probably it’s best not to use the underscore prefix for one’s own snapshots. A bit of a shame that this is not done better, type-wise.

What next?

We have rules to transform things, including smart name transformations, and we have compiler functionality to transform the data. But everything mentioned until now is very generic, fundamental functionality, bare-bones to the bone (ha!).

With just this functionality, you have everything needed to build an actual site. But starting at this level would be too tedious even for hard-core fans of DIY, so Hakyll comes with some built-in extra functionality.

And that will be the next post in the series. This one is too long already :)

Steinar H. Gunderson: Debian CEF packages

21 March, 2018 - 06:38

I've created some Debian CEF packages—CEF isn't the easiest thing to package (and it takes an hour to build even on my 20-core server, since it needs to build basically all of Chromium), but it's fairly rewarding to see everything fall into place. It should benefit not only Nageru, but also OBS and potentially CasparCG if anyone wants to package that.

It's not in the NEW queue because it depends on a patch to chromium that I hope the Chromium maintainers are brave enough to include. :-)

Reproducible builds folks: Reproducible Builds: Weekly report #151

21 March, 2018 - 02:59

Here's what happened in the Reproducible Builds effort between Sunday March 11 and Saturday March 17 2018:

Upcoming events

Patches submitted

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (168)
  • Emmanuel Bourg (2)
  • Pirate Praveen (1)
  • Tiago Stürmer Daitx (1)

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Neil McGovern: ED Update – week 11

20 March, 2018 - 22:45

It’s time (Well, long overdue) for a quick update on stuff I’ve been doing recently, and some things that are coming up. I’ve worked out a new way of doing these, so they should be more regular now, about every couple of weeks or so.

  • The annual report is moving ahead. I’ve moved up the timelines a bit here from previous years, so hopefully, the people who very kindly help author this can remember what we did in the 2016/17 financial year!
  • GUADEC/GNOME.Asia/LAS sponsorship – elements are coming together for the sponsorship brochure
    • Some sponsors are lined up, and these will be announced by the usual channels – thanks to everyone who supports the project and our conferences!
  • Shell Extensions – It’s been noticed that reviews of extensions have been taking quite some time recently, so I’ve stepped in to help. I still think that part of the process could be automated, but at the moment it’s quite manual. Help is very much appreciated!
  • The Code of Conduct consultation has been useful, and there’s been a couple of points raised where clarity could be added. I’m getting those drafted at the moment, and hope to get the board to approve this soon.
  • A couple of administrative bits:
    • We now have a filing system for paperwork in NextCloud
    • Reviewing accounts for the end of year accounts – it’s the end of the tax year, so our finances need to go to the IRS
    • Tracking of accounts receivable hasn’t been great in the past, probably not helped by GNUCash. I’m looking at alternatives at the moment.
  • Helping out with a couple of trademark issues that have come up
  • Regular working sessions for Flathub legal bits with our lawyers
  • I’ll be at LibrePlanet 2018 this weekend, and I’m giving a talk on Sunday. With the FSF, we’re hosting a SpinachCon on Friday. This aims to do some usability testing and finding those small things which annoy people.

Holger Levsen: 20180319-some-problems

20 March, 2018 - 19:26
Some problems with Codes of Conduct

shiromarieke took her time and wrote an IMHO very good text about problems with Codes of Conduct, which I wholeheartedly recommend reading.

I'll just quote two sentences which I think are essential:

Quote 1: "This is not a rant - it is a call for action: Let's gather, let's build the structures we need to make all people feel safe and respected in our communities." - in that sense, if you have feedback, please share it with shiromarieke as suggested by her. I'm very thankful she is taking the time to discuss her criticism and work on possible improvements! (I'll likely not discuss this online, though I'll be happy to discuss it offline.) I just wanted to share this link with the Debian communities, as I agree with many of shiromarieke's points and because I want to support efforts to improve this, as I believe those efforts will benefit everyone (as diversity and a welcoming atmosphere benefit everyone).

Quote 2: "Although I don't believe CoC are a good solution to help fix problems I have and will always do my best to respect existing CoC of workplaces, events or other groups I am involved with and I am thankful for your attempt to make our places and communities safer." - me too.

Daniel Pocock: Can a GSoC project beat Cambridge Analytica at their own game?

20 March, 2018 - 19:15

A few weeks ago, I proposed a GSoC project on the topic of Firefox and Thunderbird plugins for Free Software Habits.

At first glance, this topic may seem innocent and mundane. After all, we all know what habits are, don't we? There are already plugins that help people avoid visiting Facebook too many times in one day, what difference will another one make?

Yet the success of companies like Facebook and those that prey on their users, like Cambridge Analytica (who are facing the prospect of a search warrant today), is down to habits: in other words, the things that users do over and over again without consciously thinking about it. That is exactly why this plugin is relevant.

Many students have expressed interest and I'm keen to find out if any other people may want to act as co-mentors (more information or email me).

One Facebook whistleblower recently spoke about his abhorrence of the dopamine-driven feedback loops that keep users under a spell.

The game changer

Can we use the transparency of free software to help users re-wire those feedback loops for the benefit of themselves and society at large? In other words, instead of letting their minds be hacked by Facebook and Cambridge Analytica, can we give users the power to hack themselves?

In his book The Power of Habit, Charles Duhigg lays bare the psychology and neuroscience behind habits. While reading the book, I frequently came across concepts that appeared immediately relevant to the habits of software engineers and also the field of computer security, even though neither of these topics is discussed in the book.

Most significantly, Duhigg finishes with an appendix on how to identify and re-wire your habits and he has made it available online. In other words, a quickstart guide to hack yourself: could Duhigg's formula help the proposed plugin succeed where others have failed?

If you could change one habit, you could change your life

The book starts with examples of people who changed a single habit and completely reinvented themselves. For example, an overweight alcoholic and smoker who became a super-fit marathon runner. In each case, they show how the person changed a single keystone habit and everything else fell into place. Wouldn't you like to have that power in your own life?

Wouldn't it be even better to share that opportunity with your friends and family?

One of the challenges we face in developing and promoting free software is that every day, with every new cloud service, the average person in the street, including our friends, families and co-workers, is ingesting habits carefully engineered for the benefit of somebody else. Do you feel that asking your friends and co-workers not to engage you in these services has become a game of whack-a-mole?

Providing a simple and concise solution, such as a plugin, can help people to find their keystone habits and then help them change them without stress or criticism. Many people want to do the right thing: if it can be made easier for them, with the right messages, at the right time, delivered in a positive manner, people feel good about taking back control. For example, if somebody has spent 15 minutes creating a Doodle poll and sending the link to 50 people, is there any easy way to communicate your concerns about Doodle? If a plugin could highlight an alternative before they invest their time in Doodle, won't they feel better?

If you would like to provide feedback or even help this project go ahead, you can subscribe here and post feedback to the thread or just email me.

Jonathan McDowell: First impressions of the Gemini PDA

20 March, 2018 - 03:41

Last March I discovered the IndieGoGo campaign for the Gemini PDA, a plan to produce a modern PDA with a decent keyboard, inspired by the Psion 5. At that point in time the estimated delivery date was November 2017, and it wasn’t clear they were going to meet their goals. As someone who has owned a variety of phones with keyboards, from a Nokia 9000i to a T-Mobile G1, I’ve been disappointed about the lack of mobile devices with keyboards. The Gemini seemed like a potential option, so I backed it, paying a total of $369 including delivery. And then I waited. And waited. And waited.

Finally, one year and a day after I backed the project, I received my Gemini PDA. Now, I don’t get as much use out of such a device as I would have in the past. The Gemini is definitely not a primary phone replacement. It’s not much bigger than my aging Honor 7 but there’s no external display to indicate who’s calling and it’s a bit clunky to have to open it to dial (I don’t trust Google Assistant to cope with my accent enough to have it ring random people). The 9000i did this well with an external keypad and LCD screen, but then it was a brick so it had the real estate to do such things. Anyway. I have a laptop at home, a laptop at work and I cycle between the 2. So I’m mostly either in close proximity to something portable enough to move around the building, or travelling in a way that doesn’t mean I could use one.

My first opportunity to actually use the Gemini in anger therefore came last Friday, when I attended BelFOSS. I’d normally bring a laptop to a conference, but instead I decided to just bring the Gemini (in addition to my normal phone). I have the LTE version, so I put my FreedomPop SIM into it - this did limit the amount I could do with it due to the low data cap, but for a single day was plenty for SSH, email + web use. I already have the Pro version of the excellent JuiceSSH, am a happy user of K-9 Mail and tend to use Chrome these days as well. All 3 were obviously perfectly happy on the Android 7.1.1 install.

Aside: Why am I not running Debian on the device? Planet do have an image available from their Linux Support page, but it’s running on top of the crufty 3.18 Android kernel and isn’t yet a first-class citizen - it’s not clear the LTE will work outside Android easily and I’ve no hope of ARM opening up the Mali-T880 drivers. I’ve got plans to play around with improving the support, but for the moment I want to actually use the device a bit until I find sufficient time to be able to make progress.

So how did the day go? On the whole, a success. Battery life was great - I’d brought a USB battery pack expecting to need to boost the charge at some point, but I last charged it on Thursday night and at the time of writing it’s still claiming 25% battery left. LTE worked just fine; I had a 4G signal for most of the day with occasional drops down to 3G but no noticeable issues. The keyboard worked just fine; much better than my usual combo of a Nexus 7 + foldable Bluetooth keyboard. Some of the symbols aren’t where you’d expect, but that’s understandable on a scaled down keyboard. Screen resolution is great. I haven’t used the USB-C ports other than to charge and backup so far, but I like the fact there are 2 provided (even if you need a custom cable to get HDMI rather than it following the proper standard). The device feels nice and solid in your hand - the case is mostly metal plates that remove to give access to the SIM slot and (non-removable but user replaceable) battery. The hinge mechanism seems robust; I haven’t been worried about breaking it at any point since I got the device.

What about problems? I can’t deny there are a few. I ended up with a Mediatek X25 instead of an X27 - that matches what was initially promised, but there had been claims of an upgrade. Unfortunately issues at the factory meant that the initial production run got the older CPU. Later backers are supposed to get the upgrade. As someone who took the early risk this does leave a slightly bitter taste, but I doubt I’ll actually notice any significant performance difference. The keys on the keyboard are a little lopsided in places. This seems to be just a cosmetic thing and I haven’t noticed any issues in typing. The lack of first-class Debian support is disappointing, but I believe it will be resolved in time (by the community if not Planet). The camera isn’t as good as my phone’s, but then it’s a front-facing webcam style thing and it’s at least as good as my laptop’s.

Bottom line: Would I buy it again? At $369, absolutely. At the current $599? Probably not - I’m simply not on the move enough to need this on a regular basis, so I’d find it hard to justify. Maybe the 2nd gen, assuming it gets a bit more polish on the execution and proper mainline Linux support. Don’t get me wrong, I think the 1st gen is lovely and I’ve had lots of envious people admiring it, I just think it’s ended up priced a bit high for what it is. For the same money I’d be tempted by the GPD Pocket instead.

Daniel Pocock: GSoC and Outreachy: Mentors don't need to be Debian Developers

19 March, 2018 - 15:10

A frequent response I receive when talking to prospective mentors: "I'm not a Debian Developer yet".

As student applications have started coming in, now is the time for any prospective mentors to introduce themselves on the debian-outreach list if you would like to help with any of the listed projects or any topics that have been proposed spontaneously by students without any mentor.

It doesn't matter if you are a Debian Developer or not. Furthermore, mentoring in a program like GSoC or Outreachy is a form of volunteering that is recognized just as highly as packaging or any other development activity.

When an existing developer writes an email advocating your application to become a developer yourself, they can refer to your contribution as a mentor. Many other processes, such as requests for DebConf bursaries, also ask for a list of your contributions and you can mention your mentoring experience there.

With the student deadline on 27 March, it is really important to understand the capacity of the mentoring team over the next 10 days so we can decide how many projects can realistically be supported. Please ask on the debian-outreach list if you have any questions about getting involved.

Vincent Bernat: Integration of a Go service with systemd: socket activation

19 March, 2018 - 14:45

In a previous post, I highlighted some useful features of systemd when writing a service in Go, notably to signal readiness and prove liveness. Another interesting bit is socket activation: systemd listens on behalf of the application and, on incoming traffic, starts the service with a copy of the listening socket. Lennart Poettering details in a blog post:

If a service dies, its listening socket stays around, not losing a single message. After a restart of the crashed service it can continue right where it left off. If a service is upgraded we can restart the service while keeping around its sockets, thus ensuring the service is continuously responsive. Not a single connection is lost during the upgrade.

This is one solution to get zero-downtime deployment for your application. Another upside is that you can run your daemon with less privileges—losing rights is a difficult task in Go.1

The basics

Let’s take back our nifty 404-only web server:

package main

import (
    "log"
    "net"
    "net/http"
)

func main() {
    listener, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
    http.Serve(listener, nil)
}

Here is the socket-activated version, using go-systemd:

package main

import (
    "log"
    "net/http"

    "github.com/coreos/go-systemd/activation"
)

func main() {
    listeners, err := activation.Listeners(true) // ❶
    if err != nil {
        log.Panicf("cannot retrieve listeners: %s", err)
    }
    if len(listeners) != 1 {
        log.Panicf("unexpected number of socket activation (%d != 1)",
            len(listeners))
    }
    http.Serve(listeners[0], nil) // ❷
}

In ❶, we retrieve the listening sockets provided by systemd. In ❷, we use the first one to serve HTTP requests. Let’s test the result with systemd-socket-activate:

$ go build 404.go
$ systemd-socket-activate -l 8000 ./404
Listening on [::]:8000 as 3.

In another terminal, we can make some requests to the service:

$ curl '[::1]':8000
404 page not found
$ curl '[::1]':8000
404 page not found

For a proper integration with systemd, you need two files:

  • a socket unit for the listening socket, and
  • a service unit for the associated service.

We can use the following socket unit, 404.socket:

[Socket]
ListenStream = 8000
BindIPv6Only = both

[Install]
WantedBy = sockets.target

The systemd.socket(5) manual page describes the available options. BindIPv6Only = both is explicitly specified because the default value is distribution-dependent. As for the service unit, we can use the following one, 404.service:

[Unit]
Description = 404 micro-service

[Service]
ExecStart = /usr/bin/404

systemd knows the two files work together because they share the same prefix. Once the files are in /etc/systemd/system, execute systemctl daemon-reload and systemctl start 404.socket. Your service is ready to accept connections!

Handling of existing connections

Our 404 service has a major shortcoming: existing connections are abruptly killed when the daemon is stopped or restarted. Let’s fix that!

Waiting a few seconds for existing connections

We can include a short grace period for connections to terminate, then kill remaining ones:

// On signal, gracefully shut down the server and wait 5
// seconds for current connections to stop.
done := make(chan struct{})
quit := make(chan os.Signal, 1)
server := &http.Server{}
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)

go func() {
    <-quit
    log.Println("server is shutting down")
    ctx, cancel := context.WithTimeout(context.Background(),
        5*time.Second)
    defer cancel()
    if err := server.Shutdown(ctx); err != nil {
        log.Panicf("cannot gracefully shut down the server: %s", err)
    }
    close(done)
}()

// Start accepting connections.
server.Serve(listener) // listener obtained earlier (e.g. from activation.Listeners)

// Wait for existing connections before exiting.
<-done

Upon reception of a termination signal, the goroutine would resume and schedule a shutdown of the service:

Shutdown() gracefully shuts down the server without interrupting any active connections. Shutdown() works by first closing all open listeners, then closing all idle connections, and then waiting indefinitely for connections to return to idle and then shut down.

While restarting, new connections are not accepted: they sit in the listen queue associated with the socket. This queue is bounded and its size can be configured with the Backlog directive in the socket unit. Its default value is 128. You may keep this value, even when your service is expecting to receive many connections per second. When this value is exceeded, incoming connections are silently dropped. The client should automatically retry to connect. On Linux, by default, it will retry 5 times (tcp_syn_retries) in about 3 minutes. This is a nice way to avoid the herd effect you would experience on restart if you increased the listen queue to some high value.
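For reference, the directive goes next to ListenStream in the socket unit; a sketch with a purely illustrative value:

[Socket]
ListenStream = 8000
Backlog      = 1024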

Waiting longer for existing connections

If you want to wait for a very long time for existing connections to stop, you do not want to ignore new connections for several minutes. There is a very simple trick: ask systemd to not kill any process on stop. With KillMode = none, only the stop command is executed and all existing processes are left undisturbed:

[Unit]
Description = slow 404 micro-service

[Service]
ExecStart = /usr/bin/404
ExecStop  = /bin/kill $MAINPID
KillMode  = none

If you restart the service, the current process gracefully shuts down for as long as needed and systemd immediately spawns a new instance ready to serve incoming requests with its own copy of the listening socket. On the other hand, we lose the ability to wait for the service to come to a full stop—either by itself or forcefully after a timeout with SIGKILL.

Waiting longer for existing connections (alternative)

An alternative to the previous solution is to make systemd believe your service died during reload.

done := make(chan struct{})
quit := make(chan os.Signal, 1)
server := &http.Server{}
signal.Notify(quit,
    // for reload:
    syscall.SIGHUP,
    // for stop or full restart:
    syscall.SIGINT, syscall.SIGTERM)

go func() {
    sig := <-quit
    switch sig {
    case syscall.SIGINT, syscall.SIGTERM:
        // Shutdown with a time limit.
        log.Println("server is shutting down")
        ctx, cancel := context.WithTimeout(context.Background(),
            5*time.Second)
        defer cancel()
        if err := server.Shutdown(ctx); err != nil {
            log.Panicf("cannot gracefully shut down the server: %s", err)
        }
    case syscall.SIGHUP: // ❶
        // Execute a short-lived process and ask systemd to
        // track it instead of us.
        log.Println("server is reloading")
        pid := daemonizeSleep()
        daemon.SdNotify(false, fmt.Sprintf("MAINPID=%d", pid))
        time.Sleep(time.Second) // Wait a bit for systemd to check the PID

        // Wait without a limit for current connections to stop.
        if err := server.Shutdown(context.Background()); err != nil {
            log.Panicf("cannot gracefully shut down the server: %s", err)
        }
    }
    close(done)
}()

// Serve requests with a slow handler.
server.Handler = http.HandlerFunc(
    func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(10 * time.Second)
        http.Error(w, "404 not found", http.StatusNotFound)
    })
server.Serve(listener)

// Wait for all connections to terminate.
<-done
log.Println("server terminated")

The main difference is the handling of the SIGHUP signal in ❶: a short-lived decoy process is spawned and systemd is told to track it. When it dies, systemd will start a new instance. This method is a bit hacky: systemd needs the decoy process to be a child of PID 1 but Go cannot daemonize on its own. Therefore, we leverage a short Python helper, wrapped in a daemonizeSleep() function:2

// daemonizeSleep spawns a daemon process sleeping
// one second and returns its PID.
func daemonizeSleep() uint64 {
    py := `
import os
import time

r, w = os.pipe()
pid1 = os.fork()
if pid1 == 0:
    pid2 = os.fork()
    if pid2 == 0:
        for fd in {w, 0, 1, 2}:
        os.write(w, str(pid2).encode("ascii"))
    print(, 64).decode("ascii"))
    cmd := exec.Command("/usr/bin/python3", "-c", py)
    out, err := cmd.Output()
    if err != nil {
        log.Panicf("cannot execute sleep command: %s", err)
    pid, err := strconv.ParseUint(strings.TrimSpace(string(out)), 10, 64)
    if err != nil {
        log.Panicf("cannot parse PID of sleep command: %s", err)
    return pid

During reload, there may be a small period during which both the new and the old processes accept incoming requests. If you don’t want that, you can move the creation of the short-lived process outside the goroutine, after server.Serve(), or implement some synchronization mechanism. There is also a possible race-condition when we tell systemd to track another PID—see PR #7816.

The 404.service unit needs an update:

[Unit]
Description = slow 404 micro-service

[Service]
ExecStart    = /usr/bin/404
ExecReload   = /bin/kill -HUP $MAINPID
Restart      = always
NotifyAccess = main
KillMode     = process

Each additional directive is significant:

  • ExecReload tells how to reload the process—by sending SIGHUP.
  • Restart tells to restart the process if it stops “unexpectedly”, notably on reload.3
  • NotifyAccess specifies which process can send notifications, like a PID change.
  • KillMode tells to only kill the main identified process—others are left untouched.
Zero-downtime deployment?

Zero-downtime deployment is a difficult endeavor on Linux. For example, HAProxy had a long list of hacks until a proper—and complex—solution was implemented in HAproxy 1.8. How do we fare with our simple implementation?

From the kernel point of view, there is only one socket with a unique listen queue. This socket is associated with several file descriptors: one in systemd and one in the current process. The socket stays alive as long as there is at least one file descriptor. An incoming connection is put by the kernel in the listen queue and can be dequeued from any file descriptor with the accept() syscall. Therefore, this approach actually achieves zero-downtime deployment: no incoming connection is rejected.

By contrast, HAProxy was using several different sockets listening to the same addresses, thanks to the SO_REUSEPORT option.4 Each socket gets its own listening queue and the kernel balances incoming connections between the queues. When a socket gets closed, the content of its queue is lost. If an incoming connection was sitting there, it would receive a reset. An elegant patch for Linux to signal that a socket should not receive new connections was rejected. HAProxy 1.8 is now recycling existing sockets to the new processes through a Unix socket.

I hope this post and the previous one show how systemd is a good sidekick for a Go service: readiness, liveness and socket activation are some of the useful features you can get to build a more reliable application.

Addendum: identifying sockets by name

For a given service, systemd can provide several sockets. To identify them, it is possible to name them. Let’s suppose we also want to return 403 error codes from the same service but on a different port. We add an additional socket unit definition, 403.socket, linked to the same 404.service job:

[Socket]
ListenStream = 8001
BindIPv6Only = both
Service      = 404.service

Unless overridden with FileDescriptorName, the name of the socket is the name of the unit: 403.socket. Currently, go-systemd does not expose these names yet. However, they can be extracted from the LISTEN_FDNAMES environment variable:

package main

import (
    "log"
    "net/http"
    "os"
    "strings"
    "sync"

    "github.com/coreos/go-systemd/activation"
)

func main() {
    var wg sync.WaitGroup

    // Map socket names to handlers.
    handlers := map[string]http.HandlerFunc{
        "404.socket": http.NotFound,
        "403.socket": func(w http.ResponseWriter, r *http.Request) {
            http.Error(w, "403 forbidden",
                http.StatusForbidden)
        },
    }

    // Get socket names.
    names := strings.Split(os.Getenv("LISTEN_FDNAMES"), ":")

    // Get listening sockets.
    listeners, err := activation.Listeners(true)
    if err != nil {
        log.Panicf("cannot retrieve listeners: %s", err)
    }

    // For each listening socket, spawn a goroutine
    // with the appropriate handler.
    for idx := range names {
        wg.Add(1)
        go func(idx int) {
            defer wg.Done()
            http.Serve(listeners[idx], handlers[names[idx]])
        }(idx)
    }

    // Wait for all goroutines to terminate.
    wg.Wait()
}
Let’s build the service and run it with systemd-socket-activate:

$ go build 404.go
$ systemd-socket-activate -l 8000 -l 8001 \
>                         --fdname=404.socket:403.socket \
>                         ./404
Listening on [::]:8000 as 3.
Listening on [::]:8001 as 4.

In another console, we can make a request for each endpoint:

$ curl '[::1]':8000
404 page not found
$ curl '[::1]':8001
403 forbidden
  1. Many process characteristics in Linux are attached to threads. Go runtime transparently manages them without much user control. Until recently, this made some features, like setuid() or setns(), unusable. ↩︎

  2. Python is a good candidate: it’s likely to be available on the system, it is low-level enough to easily implement the functionality and, as an interpreted language, it doesn’t require a specific build step. ↩︎

  3. This is not an essential directive as the process is also restarted through socket-activation. ↩︎

  4. This approach is more convenient when reloading since you don’t have to figure out which sockets to reuse and which ones to create from scratch. Moreover, when several processes need to accept connections, using multiple sockets is more scalable as the different processes won’t fight over a shared lock to accept connections. ↩︎

Steve Kemp: Serverless deployment via docker

19 March, 2018 - 14:01

I've been thinking about serverless-stuff recently, because I've been re-deploying a bunch of services and some of them are almost microservices. One thing that a lot of my things have in common is that they're all simple HTTP-servers, presenting an API or end-point over HTTP. There is no state, no database, and no complex dependencies.

These should be prime candidates for serverless deployment, but at the same time I don't want to have to recode them for AWS Lamda, or any similar locked-down service. So docker is the obvious answer.

Let us pretend I have ten HTTP-based services, each of which each binds to port 8000. To make these available I could just setup a simple HTTP front-end:


We'd need to route the request to the appropriate back-end, so we'd start to present URLs like:


Here any request which had the prefix steve/foo would be routed to a running instance of the docker container steve/foo. In short the name of the (first) path component performs the mapping to the back-end.

I wrote a quick hack, in golang, which would bind to port 80 and dynamically launch the appropriate containers, then proxy back and forth. I soon realized that this is a terrible idea though! The problem is that a malicious client could start making requests for things like:


That would trigger my API-proxy to download the containers and spin them up, allowing arbitrary (albeit "sandboxed") code to run. So taking a step back: we want to use the path-component of an URL to decide where to route the traffic, and each container will bind to :8000 on its private (docker) IP. There's an obvious solution here: HAProxy.

So I started again: I wrote a trivial golang daemon which will react to docker events - containers starting and stopping - and generate a suitable haproxy configuration file, which can then be used to reload haproxy.
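As an illustration, here is a minimal sketch of the kind of configuration such a daemon could emit for a single container named "foo" (the IP address is a placeholder for the container's private docker IP, not from the real setup):

frontend http-in
    bind *:80
    # the first path component selects the back-end
    acl is_foo path_beg /foo/
    use_backend foo if is_foo

backend foo
    server foo1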

The end result is that if I launch a container named "foo" then requests with the /foo path prefix will reach it. Success! The only downside to this approach is that you must manually launch your back-end docker containers - but if you do so they'll become immediately available.

I guess there is another advantage. Since you're launching the containers (manually) you can set up links, volumes, and what-not. Much more so than if your API layer spun them up with zero per-container knowledge.

Michael Stapelberg: sbuild-debian-developer-setup(1)

19 March, 2018 - 14:00

I have heard a number of times that sbuild is too hard to get started with, and hence people don’t use it.

To reduce hurdles from using/contributing to Debian, I wanted to make sbuild easier to set up.

sbuild ≥ 0.74.0 provides a Debian package called sbuild-debian-developer-setup. Once installed, run the sbuild-debian-developer-setup(1) command to create a chroot suitable for building packages for Debian unstable.

On a system without any sbuild/schroot bits installed, a transcript of the full setup looks like this:

% sudo apt install -t unstable sbuild-debian-developer-setup
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libsbuild-perl sbuild schroot
Suggested packages:
  deborphan btrfs-tools aufs-tools | unionfs-fuse qemu-user-static
Recommended packages:
  exim4 | mail-transport-agent autopkgtest
The following NEW packages will be installed:
  libsbuild-perl sbuild sbuild-debian-developer-setup schroot
0 upgraded, 4 newly installed, 0 to remove and 1454 not upgraded.
Need to get 1.106 kB of archives.
After this operation, 3.556 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://localhost:3142/ unstable/main amd64 libsbuild-perl all 0.74.0-1 [129 kB]
Get:2 http://localhost:3142/ unstable/main amd64 sbuild all 0.74.0-1 [142 kB]
Get:3 http://localhost:3142/ testing/main amd64 schroot amd64 1.6.10-4 [772 kB]
Get:4 http://localhost:3142/ unstable/main amd64 sbuild-debian-developer-setup all 0.74.0-1 [62,6 kB]
Fetched 1.106 kB in 0s (5.036 kB/s)
Selecting previously unselected package libsbuild-perl.
(Reading database ... 276684 files and directories currently installed.)
Preparing to unpack .../libsbuild-perl_0.74.0-1_all.deb ...
Unpacking libsbuild-perl (0.74.0-1) ...
Selecting previously unselected package sbuild.
Preparing to unpack .../sbuild_0.74.0-1_all.deb ...
Unpacking sbuild (0.74.0-1) ...
Selecting previously unselected package schroot.
Preparing to unpack .../schroot_1.6.10-4_amd64.deb ...
Unpacking schroot (1.6.10-4) ...
Selecting previously unselected package sbuild-debian-developer-setup.
Preparing to unpack .../sbuild-debian-developer-setup_0.74.0-1_all.deb ...
Unpacking sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...
Setting up schroot (1.6.10-4) ...
Created symlink /etc/systemd/system/ → /lib/systemd/system/schroot.service.
Setting up libsbuild-perl (0.74.0-1) ...
Processing triggers for man-db ( ...
Setting up sbuild (0.74.0-1) ...
Setting up sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...

% sudo sbuild-debian-developer-setup
The user `michael' is already a member of `sbuild'.
I: SUITE: unstable
I: TARGET: /srv/chroot/unstable-amd64-sbuild
I: MIRROR: http://localhost:3142/
I: Running debootstrap --arch=amd64 --variant=buildd --verbose --include=fakeroot,build-essential,eatmydata --components=main --resolve-deps unstable /srv/chroot/unstable-amd64-sbuild http://localhost:3142/
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages 
I: Validating Packages 
I: Found packages in base already in required: apt 
I: Resolving dependencies of required packages...
I: Successfully set up unstable chroot.
I: Run "sbuild-adduser" to add new sbuild users.
ln -s /usr/share/doc/sbuild/examples/sbuild-update-all /etc/cron.daily/sbuild-debian-developer-setup-update-all
Now run `newgrp sbuild', or log out and log in again.

% newgrp sbuild

% sbuild -d unstable hello
sbuild (Debian sbuild) 0.74.0 (14 Mar 2018) on x1

| hello (amd64)                                Mon, 19 Mar 2018 07:46:14 +0000 |

Package: hello
Distribution: unstable
Machine Architecture: amd64
Host Architecture: amd64
Build Architecture: amd64
Build Type: binary

I hope you’ll find this useful.

Dirk Eddelbuettel: RcppSMC 0.2.1: A few new tricks

19 March, 2018 - 04:16

A new release, now at 0.2.1, of the RcppSMC package arrived on CRAN earlier this afternoon (and once again as a very quick pretest-publish within minutes of submission).

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts.

This release contains a few bug fixes and one minor rearrangement allowing header-only use of the package from other packages, or via a Rcpp plugin. Many of these changes were driven by new contributors, which is a wonderful thing to see for any open source project! So thanks to everybody who helped. Full details below.

Changes in RcppSMC version 0.2.1 (2018-03-18)
  • The sampler now has a copy constructor and assignment overload (Brian Ni in #28).

  • The SMC library component can now be used in header-only mode (Martin Lysy in #29).

  • Plugin support was added for use via cppFunction() and other Rcpp Attributes (or inline) functions (Dirk in #30).

  • The sampler copy ctor/assignment operator is now copy-constructor safe (Martin Lysy in #32).

  • A bug in state variance calculation was corrected (Adam in #36 addressing #34).

  • History getter methods are now more user-friendly (Tiberiu Lepadatu in #37).

  • Use of pow with atomic types was disambiguated to std::pow to help the Solaris compiler (Dirk in #42).

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: control-archive 1.8.0

19 March, 2018 - 04:14

This is the software that maintains the archive of control messages and the newsgroups and active files. I update things in place, but it's been a while since I made a formal release, and one seemed overdue (particularly since it needed some compatibility tweaks for GnuPG v1).

In code changes, signing IDs with whitespace are now supported, summaries when there is no log file for the summary period don't produce an error, and gpg1 is now used explicitly with flags to allow weak digest algorithms since the state of crypto for Usenet control messages is rather dire.
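Presumably this means something along the lines of the following (an assumption on my part, with a placeholder file name; the release itself doesn't spell out the invocation):

gpg1 --allow-weak-digest-algos --verify control-message.asc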

On the documentation side, there are multiple fixes to the README.html file that's also shipped with pgpcontrol, updating email addresses, URLs, package versions, and various other details.

For hierarchy changes, the grisbi.* key has been cleaned up a bit for hopefully more reliable verification, and everything related to gov.* has been dropped.

You can get the latest release from the control-archive distribution page.

François Marier: Dynamic DNS on your own domain

19 March, 2018 - 03:45

I recently moved my dynamic DNS hostnames from a provider now owned by Oracle to No-IP. In the process, I moved all of my hostnames under a sub-domain that I control, in case I ever want to self-host the authoritative DNS server for it.

Creating an account

In order to use my own existing domain, I registered for the Plus Managed DNS service and provided my top-level domain.

Then I created a support ticket to ask for the sub-domain feature. Without that, No-IP expects you to delegate your entire domain to them, whereas I only wanted to delegate the dyn sub-domain.

Once that got enabled, I was able to create hostnames like machine.dyn (under my domain) in the No-IP control panel. Without the sub-domain feature, you can't have dots in hostnames.

I used a bogus IP address for all of the hostnames I created in order to easily confirm that the client software is working.

DNS setup

On my registrar's side, here are the DNS records I had to add to delegate anything under the dyn sub-domain to No-IP:

dyn NS
dyn NS
dyn NS
dyn NS
dyn NS
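Once the records have propagated, the No-IP name servers should show up in a delegation check (the domain below is a placeholder):

dig +short NS dyn.example.com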
Client setup

In order to update its IP address whenever it changes, I installed ddclient on each of my machines:

apt install ddclient

While the ddclient package won't help you configure your No-IP service during installation or enable the web IP lookup method, this can all be done by editing the configuration after the fact.

I put the following in /etc/ddclient.conf:

use=web,, web-skip='IP Address'

and the following in /etc/default/ddclient:
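As a rough sketch only (the variable names are the ones the Debian package's default file uses; the values here are my illustrative assumptions, not the original settings):

run_daemon="true"
daemon_interval="3600"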


Then restart the service:

systemctl restart ddclient.service

Note that you do need to change the default update interval or the server will ban your IP address.


To test that the client software is working, wait 6 minutes (there is an internal check which cancels any client invocations within 5 minutes of another), then run it manually:

ddclient --verbose --debug

The IP for that machine should now be visible on the No-IP control panel and in DNS lookups:

dig +short

Iustin Pop: New site layout

18 March, 2018 - 23:50

With the move to Hakyll, I wondered whether to also get rid of the old /~iustin on my homepage address. I don’t remember exactly why I chose that layout - maybe I thought I’d use my domain for other purposes? But there are other ways to do that.

So, from today, my new homepage address is simply

Iustin Pop: Goodbye Ikiwiki, hello Hakyll!

18 March, 2018 - 22:05

For a while now, I have been somewhat unhappy with Ikiwiki, for very “important” reasons.

First, it’s written in Perl, and I haven’t written serious Perl for around 20 years (and even that was not serious code). So, me extending it if needed is unlikely, and in all my years of using it I haven’t touched anything except the config file.

Second, and these are the real reasons: Ikiwiki is too complex. The templating system is oh-so-verbose. I tried to move to Bootstrap for the styling of my blog (I don’t have time myself to learn enough CSS for responsive, nice and clean sites), but editing the default page template was giving me headaches. Its wiki origins mean tight integration with the source repository, with the software automatically committing stuff to git as needed (e.g. new tag pages, calendar updates, etc.), which is overhead for what should basically be a static web site.

So, in the interest of throwing the baby out with the bathwater, I said let’s give Hakyll a go. It’s written in Haskell, so everything will be good, right?

And it is indeed. It’s so bare-bones that doing anything non-trivial (as in not just a plain page) requires writing code. The exercise of having a home-page/blog was, during the past week as I worked on converting to it, a programming exercise. Which is quite strange in itself, but for me it works - another excuse for Haskell.

Now, Ikiwiki is a real wiki engine, so didn’t I lose too much by moving to Hakyll, which is just a static site generator? No; I actually stopped using the “live” functionality of Ikiwiki (its cgi-bin script) a long while ago, when I disabled the commenting functionality; there was just too much spam. And live editing of pages was never needed for my use-case.

This migration resulted in some downsides, though:

  • I haven’t yet imported, and probably never will, the ~100 or so comments that I had on the old pages; as said, I stopped accepting comments a long while ago (around 2013), so…
  • I reorganised the URLs, and a lot of my old posts were not conforming to any scheme, so posts up until mid-2016 do not have the right redirects; I’ll possibly fix this sometime soon. Posts newer than that date already had the date in the URL, and these have a generic redirection in place.
  • Hakyll doesn’t include the tags in the atom/rss feeds’ “categories”, so this is a downgrade from before.
  • Because Hakyll is more extensible, I can use canned stuff (e.g. Bootstrap, Font Awesome) more easily, which means a bigger site; the previous one was really trivial in terms of size.

Internally (for myself), there are a few more issues: Ikiwiki came with a lot of real functionality that is now missing; as a trivial example, shortcuts like [[!wiki Foobar]], which I now have to replicate.

But with all the above said, there are good parts as well. The entire site is responsive design now, and both old and future posts that include images will behave much more nicely in the non-large-desktop-viewing case. Instead of ~60 lines of (non-commented-out) configuration, I now have 200 lines of Haskell code (ignoring comments, etc.); this is a net win, right?

On top of that, because of the lack of built-in things, I had to learn how to use Hakyll, so now I can do (and already did) much more customisation of the html output; random example: for linking to internal pictures, I have a simple macro:

$pic("xxx.jpg", "alt-text")$

which knows to look for xxx.jpg in the right place, relative to the current page URL, and all this results in custom HTML code, rather than what the markdown engine would do by default. Case in point, and this is not hard-coded in the source of this page but dynamically expanded from the above example (and escaped), the result is:

<figure class="figure">
  <a href="/images/2018-03-18-hello-hakyll/xxx.jpg">
    <img src="/images/2018-03-18-hello-hakyll/xxx.jpg" class="figure-img img-fluid rounded" alt="alt-text">
  </a>
  <figcaption class="figure-caption text-left font-italic">alt-text</figcaption>
</figure>

Another good part is that, given how bare-bones Hakyll is, if you look for “How to do foo in Hakyll”, you don’t get as a result “Use this plugin”, but rather - here’s the code how to do it.

I’ll have some more work to do before I’m happy with the end state, but - as change for the sake of change goes - this was fun stuff.


Creative Commons License. The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.