Planet Debian


Julien Danjou: Python Tools to Try in 2021

16 June, 2021 - 22:30

The Python programming language is one of the most popular and in-demand. It is free, has a large community, is suited to projects of varying complexity, is easy to learn, and opens up great opportunities for programmers. To work comfortably with it, you need special Python tools that can simplify your work. We have selected the best Python tools that will be relevant in 2021.

Popular Python Tools 2021

Python tools make life much easier for any developer and provide ample opportunities for creating effective applications or sites. These solutions help to automate different processes and minimize routine tasks.

In fact, their functionality varies considerably. Some are made for full-fledged, complex, multi-level development, while others have a simplified interface that lets you develop individual modules and blocks. Before choosing a tool, you need to define your objectives and goals; then it will become clear what exactly to use.


Mailtrap

As you probably know, in order to send an email you need SMTP (Simple Mail Transfer Protocol). This is because you can't just hand a message directly to the recipient: it needs to be sent to a server, from which the recipient will download it using IMAP or POP3.

Mailtrap provides an opportunity to send emails from Python. Moreover, it provides a REST API to access current emails. It can be used to automate email testing, which will improve your email marketing campaigns. For example, you can check the password recovery form in a Selenium test and immediately see whether an email was sent to the correct address, then take the new password from the email and try to enter the site with it. Cool, isn't it?
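To make the flow above concrete, here is a minimal sketch of sending a message from Python with the standard library's smtplib; the host, port, and credentials are placeholders, not real Mailtrap values:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "app@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Password recovery"
msg.set_content("Follow this link to reset your password: ...")

# Placeholder SMTP endpoint and credentials; substitute the ones
# from your Mailtrap inbox settings.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("smtp_user", "smtp_password")
    server.send_message(msg)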

  • All emails are in one place.
  • Mailtrap provides multiple inboxes.
  • Shared access is present.
  • It is easy to set up.
  • It has a RESTful API.

No visible disadvantages were found.


Django

Django is a free and open-source full-stack framework, one of the most important and popular among Python developers. It helps you move from a prototype to a ready-made working solution in a short time, since its main task is to automate processes and speed up work through ready-made components and libraries. It's a great choice for a product launch.

You can use Django if at least a few of the following points interest you:

  • There is a need to develop the server-side of the API.
  • You need to develop a web application.
  • In the course of work, many changes are made, you have to constantly deploy the application and make edits.
  • There are many complex tasks that are difficult to solve on your own, and you will need the help of the community.
  • ORM support is needed to avoid accessing the database directly.
  • There is a need to integrate new technologies such as machine learning.

Django is a great Python web framework that does its job. It is not for nothing that it is one of the most popular, actively used by millions of developers.


Django has quite a few advantages. It contains a large number of ready-made solutions, which greatly simplifies development. The admin panel, database migrations, various forms, and user authentication tools are extremely helpful. The structure is very clear and simple.

A large community helps to solve almost any problem. Thanks to ORM, there is a high level of security and it is comfortable to work with databases.
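As a small illustration of the ORM point, a sketch of a model and a query as they might appear inside a configured Django project; the model and field names are invented for the example:

from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateTimeField(auto_now_add=True)

# The ORM generates parameterized SQL, so you query objects instead
# of writing raw SQL against the database by hand:
recent = Article.objects.filter(title__icontains="python").order_by("-published")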


Despite its powerful capabilities, the Django web framework has drawbacks. It is very massive and monolithic, and therefore develops slowly: despite its many generic modules, the development speed of Django itself is reduced.


CherryPy

CherryPy is a micro-framework. It is designed to solve specific problems and is capable of running programs on any operating system. CherryPy is used in the following cases:

  • To create an application with small code size.
  • There is a need to manage several servers at the same time.
  • You need to monitor the performance of applications.

CherryPy is one of the Python frameworks designed for specific tasks. It's clear, user-friendly, and ideal for Android development.


The CherryPy tool has a friendly and understandable development environment. It is a functional and complete framework that can be used to build good applications. The source code is open, so the platform is completely free for developers, and the community, although not too large, is very responsive and always helps to solve problems.
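To give a feel for how compact CherryPy applications are, here is a minimal hello-world sketch (class and handler names are arbitrary):

import cherrypy

class HelloWorld:
    @cherrypy.expose
    def index(self):
        return "Hello from CherryPy!"

if __name__ == "__main__":
    # Serves on http://127.0.0.1:8080/ by default
    cherrypy.quickstart(HelloWorld())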


There are not many cons to this Python tool. It is not capable of performing complex tasks and functions; it is intended more for specific solutions, for example, the development of certain plugins or modules.


Pyramid

The Pyramid tool is designed for programming complex objects and solving multifunctional problems. It is used by professional programmers, traditionally for identification and routing. It is aimed at a wide audience and is capable of developing API prototypes.

It is used in the following cases:

  • You need problem-indicator tools to make timely adjustments and edits.
  • You use several programming languages at once.
  • You work with reporting, financial calculations, and forecasting.
  • You need to quickly create a simple application.

At the same time, the Pyramid web framework allows you to create complex applications with great functionality, like translation software.


Pyramid does an excellent job of developing basic applications quickly. It is quite flexible and easy to learn. In fact, the key to the success of this framework is that it is completely based on fundamental principles, using simple and basic programming techniques. It is minimalistic, but at the same time offers users a lot of freedom of action. It is able to work with both small applications and powerful multifunctional programs.
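A minimal Pyramid hello-world, to show the "simple and basic" style the paragraph describes; it is runnable as-is with Pyramid installed, and the route and handler names are arbitrary:

from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def hello(request):
    return Response("Hello from Pyramid!")

if __name__ == "__main__":
    # Wire one route to one view, then serve the WSGI app
    with Configurator() as config:
        config.add_route("hello", "/")
        config.add_view(hello, route_name="hello")
        app = config.make_wsgi_app()
    make_server("0.0.0.0", 6543, app).serve_forever()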


It is difficult to deviate from the basic principles: this Python tool makes the decisions for you. Simple programs are very easy to implement, but to do something complex and large-scale you have to immerse yourself completely in the environment and obey it.


Grok

Grok is a Python tool that works with templates. Its main task is to eliminate repetition in the code. If an element is repeated, the template that was created earlier is simply applied. This greatly simplifies and speeds up the work.

Grok suits developers in the following cases:

  • If a programmer has little experience and is not yet ready to develop their own modules.
  • There is a need to quickly develop a simple application.
  • The functionality of the application is simple, straightforward, and the interface does not play a key role.

The Grok framework is a child of Zope3, which was released earlier. It has a simplified structure of work, easy installation of modules, more capabilities, and better flexibility. It is designed to develop small applications. Yes, it is not intended for complex work, but due to its functionality, it allows you to quickly implement a project.


The Grok community is not very large, as this Python tool has not gained widespread popularity. Nevertheless, it is used by Python adepts for comfortable development. It is impossible to implement complex tasks with it, since its possibilities are quite limited.

Grok is one of the best Python Web Frameworks. It is understandable and has enough features for comfortable development.


Web2Py

Web2Py is a Python tool that has its own IDE, which includes a code editor, debugger, and deployment tools. It works great without the need for configuration or installation, provides a high level of data security, and is suitable for work on various platforms.

Web2Py is great in the following cases:

  • When there is a need to develop something on different operating systems.
  • If there is no way to install and configure the framework.
  • When a high level of data security is required, for example, when developing financial applications or sales performance management tools.
  • If you need to carefully track bugs right during development, and not during the testing phase.

Web2Py is capable of working with different protocols, has a built-in error tracker, and has a backward compatibility feature that helps to work on the basis of previous versions of the framework. This means that code maintenance becomes much easier and cheaper. It's free, open-source, and very flexible.


Among the many Python tools, there are not many that require the latest version of the language. Web2Py is one of those that do: it won't run on older versions of Python, so you need to constantly monitor updates.

Web2Py does an excellent job of its tasks. It is quite simple and accessible to everyone.


BlueBream

BlueBream used to be called Zope3. It copes well with tasks of medium and high complexity and is suitable for working on serious projects.


The BlueBream build system is quite powerful and suitable for complex tasks. You can create functional applications with it, and the principle of component reuse keeps the code simpler while increasing development speed. The software can be scaled, and a transactional object database provides an easy way to store data. This means that queries are processed quickly and database management is simple.
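The transactional object database in question is the ZODB, which BlueBream inherits from Zope. A minimal sketch of storing and committing an object with it directly (the file name is arbitrary):

import ZODB, ZODB.FileStorage
import transaction

storage = ZODB.FileStorage.FileStorage("data.fs")
db = ZODB.DB(storage)
conn = db.open()
root = conn.root()

# Any picklable object can be stored; the commit is transactional.
root["greeting"] = "hello"
transaction.commit()
db.close()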


This is not a very flexible framework; it is better to know clearly in advance what is required of it. In addition, it cannot withstand heavy loads: when working with 1000 users at the same time, it can crash and give errors. Therefore, it should be used to solve narrow problems.

Python frameworks are often designed for specific tasks. BlueBream is one of these and is suitable for applications where database management plays a key role.


Python tools come in different forms and have vastly different capabilities. There are quite a few of them, but in 2021 these will be the most popular and in demand. Experienced programmers always choose several development tools for their comfortable work.

Abhijith PA: Changing LCD screen of car infotainment system

16 June, 2021 - 12:53

I have a 2013-model used car that I bought two years ago. It came with a 7-inch touch screen infotainment system on its dashboard, with features like navigation, Bluetooth phone connectivity, and a good FM/AM radio. Except for the radio, I rarely use the navigation or Bluetooth phone sync. After a couple of months the touch screen started to become non-responsive. Since all the important things such as call termination, mute, and volume control have physical switches, I was happy with it.

During a periodic check-up at a local workshop, the mechanic pulled the battery terminals, which locked the infotainment system. Now it asks for a 4-digit pass code. It's one of their anti-theft mechanisms, and in order to enter those digits you need a touch-responsive screen. So now I am completely locked out.

I visited the car's service center to get it changed; it turns out they don't repair it, they only replace the whole unit, which would cost me Rs 50,000. Considering that my usage is restricted to the radio, that price is way too much.

So my next options were,

  1. Use a normal cheap car radio player. I dropped this plan, since stereo players come in very small sizes and I would need plastic placeholders to close off the rest of the area left by the 7-inch screen. This could mess up the aesthetics of the dashboard, and it would also affect the value of the car if I sell it in the future.

  2. Use a 7-inch third-party infotainment system available in the market. I enquired with a local car accessories shop, and I dropped this plan as well because most of them are Android and come with an enormous number of pre-installed apps which I am never going to use anyway. And I don't need one more device that needs to be connected to the Internet. Also, the wiring at the back of these devices is different from what I already have, so it would need rewiring.

  3. The last option was to change the infotainment system parts on my own. I was under the impression that these units are made in-house, tightly assembled, and sealed with screws that you have never seen in your life and will never be able to open up, let alone fix parts.

A couple of articles showed me that car infotainment system sizes and the wiring at the back are actually standardized. This particular system's OEM is LG.

After gaining confidence from several YouTube videos and articles, I chose the third option and started to disassemble the head unit. Luckily I had all the necessary screwdriver heads. After taking everything apart, I could see that the back of the touch screen glass had bulged and was pushing against the display. The back of the display had a label with its manufacturer's name, Innolux, and its model number, AT070TN94. You might know this company from all those Raspberry Pi displays in the market. Tesla also uses Innolux displays in its infotainment units.

I was able to spot an online reseller in Hong Kong. I only needed the touch screen, but to be on the safe side I placed an order for the display as well (the price difference was negligible). It took one month to reach my hands due to COVID restrictions. I assembled everything and retrieved the radio pass code the very same day I received the item. Everything is working now. Yay!

The touch screen + display cost me Rs 6000/-. I could easily get an average Android infotainment system for around this price, but it was the price I paid for avoiding e-waste.

So if you have a broken head unit in your pre-2015-era car, try to change the parts yourself. It's not complicated the way our mobile phones are.

Steinar H. Gunderson: Nikon D4 repair

16 June, 2021 - 02:45
“Various error messages saved on sequence and aperture control unit in front module. Error also appears on testing here. The service period for this product is expired, and Nikon will not deliver parts. Thus, we can unfortunately not repair your camera. Charged for inspection.”

So, seriously, Nikon, I could understand it if this were a dinky $200 compact, or a phone which could no longer keep up with the burdens of the ecosystem it was part of, but this is a 2012 flagship DSLR. You pay $6000, and yet you can't even get parts nine years later? I'm usually not the one to complain the loudest about “planned obsolescence”, but I think stocking up on parts should be possible. :-)

Supposedly, a third-party repair shop still has D4 parts and the know-how to switch them without messing things up (which I don't). So with some luck, I'll get five more years or so out of it. Oh well, 131k exposures isn't bad at any rate…

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2021

15 June, 2021 - 19:04

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In May, we again put aside 2100 EUR to fund Debian projects. There were no proposals for new projects received, so we're looking forward to receiving more proposals from various Debian teams! Please do not hesitate to submit a proposal if there is a project that could benefit from the funding!

Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In May, 12 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 7.0h (out of 14h assigned and 12h from April), thus carrying over 19h to June.
  • Anton Gladky did 12h (out of 12h assigned).
  • Ben Hutchings did 16h (out of 13.5h assigned plus 4.5h from April), thus is carrying over 2h for June.
  • Chris Lamb did 18h (out of 18h assigned).
  • Holger Levsen's work was coordinating/managing the LTS team; he did 5.5h and gave back 6.5h to the pool.
  • Markus Koschany did 15h (out of 29.75h assigned and 15h from April), thus carrying over 29.75h to June.
  • Ola Lundqvist did 12h (out of 12h assigned and 4.5h from April), thus carrying over 4.5h to June.
  • Roberto C. Sánchez did 7.5h (out of 27.5h assigned and 27h from April), and gave back 47h to the pool.
  • Sylvain Beucler did 29.75h (out of 29.75h assigned).
  • Thorsten Alteholz did 29.75h (out of 29.75h assigned).
  • Utkarsh Gupta did 29.75h (out of 29.75h assigned).

Evolution of the situation

In May we released 33 DLAs and mostly skipped our public IRC meeting at the end of the month. In June we'll have another team meeting using video, as outlined on our LTS meeting page.
Also, two months ago we announced that Holger would step back from his coordinator role, and today we are announcing that he is back for the time being, until a new coordinator is found.
Finally, we would like to remark once again that we are constantly looking for new contributors. Please contact Holger if you are interested!

The security tracker currently lists 41 packages with a known CVE and the dla-needed.txt file has 21 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

Ben Hutchings: Debian LTS work, May 2021

15 June, 2021 - 04:47

In May I was assigned 13.5 hours of work by Freexian's Debian LTS initiative and carried over 4.5 hours from earlier months. I worked 16 hours and will carry over the remainder.

I finished reviewing the futex code in the PREEMPT_RT patchset for Linux 4.9, and identified several places where it had been mis-merged with the recent futex security fixes. I sent a patch for these upstream, which was accepted and applied in v4.9.268-rt180.

I have continued updating the Linux 4.9 package to later upstream stable versions, and backported some missing security fixes. I have still not made a new upload, but intend to do so this week.

Jonathan Dowland: Opinionated IkiWiki v1

15 June, 2021 - 03:58

It's been more than a year since I wrote about Opinionated IkiWiki, a pre-configured, containerized deployment of Ikiwiki with opinions. My intention was to make something that is easy to get up and running if you are more experienced with containers than IkiWiki.

I haven't yet switched to Opinionated IkiWiki for this site, but that was my goal, and I think it's mature enough now that I can migrate over at some point, so it seems a good time to call it Version 1.0. I have been using it for my own private PIM systems for a while now.

You can pull built images from here: The source lives here: A description of some of the changes made to the IkiWiki version lives here:

Enrico Zini: Pipelining

14 June, 2021 - 22:40

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Running actions on a server is nice, but a network round trip for each action is not very efficient. If I need to run a linear sequence of actions, I can stream them all to the server, and then read replies streamed from the server as they get executed.

This technique is called pipelining and one can see it used, for example, in Redis or Mitogen.


Ansible has the concept of "Roles" as a series of related tasks: I'll play with that. Here's an example role to install and set up fail2ban:

from transilience import role
from transilience.actions import builtin

class Role(role.Role):
    def main(self):
        # The action bodies were garbled in extraction; this jail
        # configuration is a plausible reconstruction, not the author's.
        self.add(builtin.copy(
            dest="/etc/fail2ban/jail.local",
            content="[sshd]\nenabled = true\n\n[postfix]\nenabled = true\n",
        ), name="configure fail2ban")

I prototyped roles as classes, with methods that push actions down the pipeline. If an action fails, all further actions for the same role won't be executed, and will be marked as skipped.

Since skipping is applied per-role, it means that I can blissfully stream actions for multiple roles to the server down the same pipe, and errors in one role will stop executing that role and not others. Potentially I can get multiple roles going with a single network round-trip:


import sys
from transilience.system import Mitogen
from transilience.runner import Runner

def main():
    system = Mitogen("my server", "ssh", hostname="", username="root")

    runner = Runner(system)

    # Send roles to the server (the role names here are illustrative;
    # the original ones were lost in extraction)
    runner.add_role("fail2ban")
    runner.add_role("prosody")

    # Run until all roles are done
    runner.main()

if __name__ == "__main__":
    sys.exit(main())
That looks like a playbook, using Python as glue rather than YAML.

Decision making in roles

Besides filing a series of actions, a role may need to take decisions based on the results of previous actions, or on facts discovered from the server. In that case, we need to wait until the results we need come back from the server, and then decide if we're done or if we want to send more actions down the pipe.

Here's an example role that installs and configures Prosody:

from transilience import actions, role
from transilience.actions import builtin
from .handlers import RestartProsody

class Role(role.Role):
    """
    Set up prosody XMPP server
    """
    def main(self):
        # Note: the self.add(...) wrappers around the actions below were
        # lost in extraction and are reconstructed here.
        self.add(actions.facts.Platform(), then=self.have_facts)

        self.add(builtin.apt(
            name=["certbot", "python-certbot-apache"],
            state="present",
        ), name="install support packages")

        self.add(builtin.apt(
            name=["prosody", "prosody-modules", "lua-sec", "lua-event", "lua-dbi-sqlite3"],
            state="present",
        ), name="install prosody packages")

    def have_facts(self, facts):
        facts = facts.facts  # Malkovich Malkovich Malkovich!

        domain = facts["domain"]
        ctx = {
            "ansible_domain": domain,
        }

        self.add(builtin.command(
            argv=["certbot", "certonly", "-d", f"chat.{domain}", "-n", "--apache"],
        ), name="obtain chat certificate")

        with self.notify(RestartProsody):
            self.add(builtin.copy(
                content=self.template_engine.render_file("roles/prosody/templates/prosody.cfg.lua", ctx),
                dest="/etc/prosody/prosody.cfg.lua",  # reconstructed path
            ), name="write prosody configuration")

            self.add(builtin.copy(
                src="roles/prosody/templates/firewall.pfw",  # reconstructed
                dest="/etc/prosody/firewall.pfw",
            ), name="write prosody firewall")

    # ...

This files some general actions down the pipe, with a hook that says: when the results of this action come back, run self.have_facts().

At that point, the role can use the results to build certbot command lines, render prosody's configuration from Jinja2 templates, and file further actions down the pipe.

Note that this way, while the server is potentially still busy installing prosody, we're already streaming prosody's configuration to it.

If anything goes wrong with the installation of prosody's package, the role will be marked as failed, and all further actions of the same role, even those filed by have_facts(), will be skipped.

Notify and handlers

In the previous example self.notify() also appears: that's my attempt to model the equivalent of Ansible's handlers. If any of the actions inside the with block produce changes, then the RestartProsody role will be executed, potentially filing more actions at the end of the playbook.

The runner will take care of collecting all the triggered role classes in a set, which discards duplicates, and then running the main() method of all resulting roles, which will cause more actions to be filed down the pipe.
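For context, a handler can be a role like any other. A minimal sketch of what a RestartProsody handler might look like, assuming Transilience ports Ansible's systemd module as builtin.systemd (that action name is an assumption):

from transilience import role
from transilience.actions import builtin

class RestartProsody(role.Role):
    def main(self):
        # builtin.systemd is assumed; any action that restarts the
        # service would do. It is filed like any other action.
        self.add(builtin.systemd(unit="prosody", state="restarted"),
                 name="restart prosody")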

Action conditions

Sometimes some actions are only meaningful as consequences of other actions. Let's take, for example, enabling buster-backports as an extra apt source:

        a = self.add(builtin.copy(
            dest="/etc/apt/sources.list.d/debian-buster-backports.list",
            content="deb [arch=amd64] buster-backports main contrib",
        ), name="enable backports")

        # Reconstructed wrapper: the original apt action was garbled in
        # extraction; update_cache mirrors Ansible's apt module.
        self.add(builtin.apt(
            update_cache=True,
        ), name="update after enabling backports",
           # Run only if the previous copy changed anything
           when={a: ResultState.CHANGED},
        )
Here we want to update Apt's cache, which is a slow operation, only after we actually write /etc/apt/sources.list.d/debian-buster-backports.list. If the file was already there from a previous run, we can skip downloading the new package lists.

The when= attribute adds an annotation to the action that is sent down the pipeline, saying that it should only be run if the state of a previous action matches the given one.

In this case, when on the remote it's the turn of "update after enabling backports", it gets skipped unless the state of the previous "enable backports" action is CHANGED.

Effects of pipelining

I ported enough of Ansible's modules to be able to run the provisioning scripts of my VPS entirely via Transilience.

This is the playbook run as plain Ansible:

$ time ansible-playbook vps.yaml
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

real    2m10.072s
user    0m33.149s
sys 0m10.379s

This is the same playbook run with Ansible sped up via the Mitogen backend, which makes Ansible more bearable:

$ export ANSIBLE_STRATEGY=mitogen_linear
$ time ansible-playbook vps.yaml
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

real    0m24.428s
user    0m8.479s
sys 0m1.894s

This is the same playbook ported to Transilience:

$ time ./provision
real    0m2.585s
user    0m0.659s
sys 0m0.034s

Doing nothing went from 2 minutes down to 3 seconds!

That's the kind of running time that finally makes me comfortable with maintaining my VPS by editing the playbook only, and never logging in to mess with the system configuration by hand!

Next steps

I'm quite happy with what I have: I can now maintain my VPS with a simple script with quick iterative cycles.

I might use it to develop new playbooks, and port them to ansible only when they're tested and need to be shared with infrastructure that needs to rely on something more solid and battle tested than a prototype provisioning system.

I might also keep working on it as I have more interesting ideas that I'd like to try. I feel like Ansible reached some architectural limits that are hard to overcome without a major redesign, and are in many way hardcoded in its playbook configuration. It's nice to be able to try out new designs without that baggage.

I'd love it if even just the library of Transilience actions could grow, and gain widespread use. Ansible modules standardized a set of management operations, that I think became the way people think about system management, and should really be broadly available outside of Ansible.

If you are interested in playing with Transilience, for example by:

  • porting more Ansible modules to Transilience actions
  • improving the command line interface
  • testing other ways to feed actions to pipelines
  • testing other pipeline primitives
  • adding backends besides Local and Mitogen
  • prototyping a parser to turn a subset of YAML playbook syntax into Transilience actions
  • adopting it into your multinational organization infrastructure to speed up provisioning times by orders of magnitude, at the cost of the development time it takes to turn this prototype into something solid and road-tested
  • creating a startup and getting millions in venture capital to disrupt the provisioning ecosystem

do get in touch or send a pull request! :)

Enrico Zini: Use ansible actions in a script

14 June, 2021 - 21:35

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

I like many of the modules provided with Ansible: they are convenient, platform-independent implementations of common provisioning steps. They'd be fantastic to have in a library that I could use in normal programs.

This doesn't look easy to do with Ansible code as it is. Also, the code quality of various Ansible modules doesn't fit something I'd want in a standard library of cross-platform provisioning functions.

Modeling Actions

I want to keep the declarative, idempotent aspect of describing actions on a system. A good place to start could be a hierarchy of dataclasses that hold the same parameters as ansible modules, plus a run() method that performs the action:

from dataclasses import dataclass, field
import uuid

import transilience.system

@dataclass
class Action:
    """
    Base class for all action implementations.

    An Action is the equivalent of an ansible module: a declarative
    representation of an idempotent operation on a system.

    An Action can be run immediately, or serialized, sent to a remote system,
    run, and sent back with its results.
    """
    # Result and ResultState are defined elsewhere in transilience
    uuid: str = field(default_factory=lambda: str(uuid.uuid4()))
    result: Result = field(default_factory=Result)

    def summary(self):
        """
        Return a short text description of this action
        """
        return self.__class__.__name__

    def run(self, system: transilience.system.System):
        """
        Perform the action
        """
        self.result.state = ResultState.NOOP

I like that Ansible tasks have names, and I hate having to give names to trivial tasks like "Create directory /foo/bar", so I added a summary() method so that trivial tasks like that can take care of naming themselves.

Dataclasses make it possible to introspect fields and annotate them with extra metadata, and together with docstrings, I can make actions reasonably self-documenting.

I ported some of Ansible's modules over: see complete list in the git repository.

Running Actions in a script

With a bit of glue code I can now run Ansible-style functions from a plain Python script:


from transilience.runner import Script

script = Script()

for i in range(10):
    script.builtin.file(state="touch", path=f"/tmp/test{i}")

Running Actions remotely

Dataclasses have an asdict function that makes them trivially serializable. If their members stick to data types that can be serialized with Mitogen, and the run implementation doesn't use non-pure, non-stdlib Python modules, then I can trivially run actions on all sorts of remote systems using Mitogen:


from transilience.runner import Script
from transilience.system import Mitogen

script = Script(system=Mitogen("my server", "ssh", hostname="", username="user"))

for i in range(10):
    script.builtin.file(state="touch", path=f"/tmp/test{i}")
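As a toy illustration of the serializability point, with an invented Touch dataclass standing in for a real action:

from dataclasses import dataclass, asdict

@dataclass
class Touch:
    path: str
    state: str = "touch"

# asdict() reduces the action to plain dicts and strings, which are
# safe to ship across a Mitogen connection and rebuild on the far side.
print(asdict(Touch(path="/tmp/test0")))
# -> {'path': '/tmp/test0', 'state': 'touch'}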

How fast would that be, compared to Ansible?

$ time ansible-playbook test.yaml
real    0m15.232s
user    0m4.033s
sys 0m1.336s

$ time ./test_script

real    0m4.934s
user    0m0.547s
sys 0m0.049s

With a network round-trip for each single operation I'm already 3x faster than Ansible, and it can run on nspawn containers, too!

Sweet! Next step, pipelining.

Enrico Zini: My gripes with Ansible

14 June, 2021 - 21:30

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Musing about Ansible

I like infrastructure as code.

I like to be able to represent an entire system as text files in a git repository, and to be able to use that to recreate the system, from my Virtual Private Server, to my print server and my stereo, to build machines, to other kinds of systems I might end up setting up.

I like that the provisioning work I do on a machine can be self-documenting and replicable at will.

The good

For that I quite like Ansible, in principle: simple (in theory) YAML files describe a system in (reasonably) high-level steps, and it can be run on (almost) any machine that happens to have a simple Python interpreter installed.

I also like many of the modules provided with Ansible: they are convenient, platform-independent implementations of common provisioning steps. They'd be fantastic to have in a library that I could use in normal programs.

The bad

Unfortunately, Ansible is slow. Running the playbook on my VPS takes about 3 whole minutes even if I'm just changing a line in a configuration file.

This means that most of the time, instead of changing that line in the playbook and running it, to then figure out after 3 minutes that it was the wrong line, or I made a spelling mistake in the playbook, I end up logging into the server and editing in place.

That defeats the whole purpose, but that level of latency between iterations is just unacceptable to me.

The ugly

I also think that Ansible has outgrown its original design, and the supposedly declarative, idempotent YAML has become a full declarative scripting language in disguise, whose syntax is extremely awkward and verbose.

If I'm writing declarative descriptions, YAML is great. If I'm writing loops and conditionals, I want to write code, not templated YAML.

I also keep struggling trying to use Ansible to provision chroots and nspawn containers.

A personal experiment: Transilience

There's another thing I like in Ansible: it's written in Python, which is a language I'm comfortable with. Compared to other platforms, it's one that I'm more likely to be able to control beyond being a simple user.

What if I can port Ansible modules into a library of high-level provisioning functions, that I can just run via normal Python scripts?

What if I can find a way to execute those scripts remotely and not just locally?

I've started writing some prototype code, and the biggest problem is, of course, finding a name.

Ansible comes from Ursula K. Le Guin's Hainish Cycle novels, where it is a device that allows its users to communicate near-instantaneously over interstellar distances. Traveling, however, is still constrained by the speed of light.

Later in the same universe, the novels A Fisherman of the Inland Sea and The Shobies' Story, talk about experiments with instantaneous interstellar travel, as a science Ursula Le Guin called transilience:

Transilience: n. A leap across or from one thing to another [1913 Webster]

Transilience. I like everything about this name.

Now that the hardest problem is solved, the rest is just a simple matter of implementation details.

François Marier: Self-hosting an Ikiwiki blog

14 June, 2021 - 13:18

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies

Here are all of the extra Debian packages I had to install on my server:

apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl

Then I enabled the CGI module in Apache:

a2enmod cgi

and un-commented the following in /etc/apache2/mods-available/mime.conf:

AddHandler cgi-script .cgi

Creating a separate user account

Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:

adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog

git setup

Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):

git clone --bare git:// source.git

Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work.

Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:

git branch -d setup
git remote rm origin

Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:

cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud

I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop.

Finally, I generated a new ssh key without a passphrase:

ssh-keygen -t ed25519

and added it as deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config

While I started with the Branchable setup file, I changed the following things in it:

srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
add_plugins:
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- recentchangesdiff
- attachment
- remove
- rename
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- recentchanges
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- repolist
- toggle
- txt
sslcookie: 1
cookiejar:
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
allowed_attachments: admin()

Then I created the destdir:

mkdir /var/www/blog
chown blog:blog /var/www/blog

and generated the initial copy of the blog as the blog user:

ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild

One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.

Apache config

Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/
    SSLCertificateKeyFile /etc/letsencrypt/live/

    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"

    Include /etc/fmarier-org/blog-common

<VirtualHost *:443>

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/
    SSLCertificateKeyFile /etc/letsencrypt/live/

    Redirect permanent /

<VirtualHost *:80>

    Redirect permanent /

and the common config I put in /etc/fmarier-org/blog-common:


DocumentRoot /var/www/blog

LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log

AddType application/rss+xml .rss

<Location /blog.cgi>
        Options +ExecCGI
</Location>

before enabling all of this using:

a2ensite blog
apache2ctl configtest
systemctl restart apache2.service

The domain used to point to Feedburner and so I need to maintain it in order to avoid breaking RSS feeds for folks who added my blog to their reader a long time ago.

Server-side improvements

Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served.

First of all, I enabled the HTTP/2 and Brotli modules:

a2enmod http2
a2enmod brotli

and enabled Brotli compression by putting the following in /etc/apache2/conf-available/francois.conf:

<IfModule mod_brotli.c>
    AddOutputFilterByType BROTLI_COMPRESS text/html text/plain text/xml text/css text/javascript application/javascript
    BrotliCompressionQuality 4
</IfModule>

Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion

    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'

<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common

Then I followed the Mozilla Observatory recommendations and enabled the following security headers:

Header set Content-Security-Policy: "default-src 'none'; report-uri ; style-src 'self' 'unsafe-inline' ; img-src 'self' ; script-src https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"

Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure.

I also used the Mozilla TLS config generator to improve the TLS config for my server.

Then I added security.txt and gpc.json to the root of my git repo and then added the following aliases to put these files in the right place:

Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt

I also followed these instructions to create a sitemap for my blog with the following alias:

Alias /sitemap.xml /var/www/blog/sitemap/index.rss

Finally, I simplified a few error pages to save bandwidth:

ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s

Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:


Based on that, I added a few redirects to point bots and users to the location of my RSS feed:

Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss

and to tell them to stop trying to fetch obsolete resources:

Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi

I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using

Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom

I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:

User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Future improvements

There are a few things I'd like to improve on my current setup.

The first one is to remove the ikiwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web packages, since that's all I use them for.

Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:

[receive]
    advertisePushOptions = true
    certNonceSeed = "(random string)"

but I'd like to also reject unsigned pushes.

While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this but the actual calls to allow seemed to be located deep within jQuery and so I gave up. Patches for this would be very welcome of course.

Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:

[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom

This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.

Sergio Durigan Junior: I am not on Freenode anymore

14 June, 2021 - 11:00

This is a quick public announcement to say that I am not on the Freenode IRC network anymore. My nickname (sergiodj), which was more than a decade old, has just been deleted (along with every other nickname and channel in that network) from their database today, 2021-06-14.

For your safety, you should assume that everybody you knew at Freenode is not there either, even if you see their nicknames online. Do not trust without verifying. In fact, I would strongly encourage that you do not join Freenode anymore: their new policies are absolutely questionable and their disregard for their users is blatant.

If you would like to chat with me, you can find me at OFTC (preferred) and Libera.

Vincent Fourmond: Solution for QSoas quiz #2: averaging several Y values for the same X value

14 June, 2021 - 04:14
This post describes two similar solutions to the Quiz #2, using the data files found there. The two solutions described here rely on split-on-values. The first solution is the one that came naturally to me, and is by far the most general and extensible, but the second one is shorter, and doesn't require external script files.
Solution #1

The key to both solutions is to separate the original data into a series of datasets that only contain data at a fixed value of x (which corresponds here to a fixed pH), and then process each dataset one by one to extract the average and standard deviation. This first step is done thus:
QSoas> load kcat-vs-ph.dat
QSoas> split-on-values pH x /flags=data
After these commands, the stack contains a series of datasets bearing the data flag, each containing a single column of data, as can be seen from the beginning of the output of the show-stack command:
QSoas> k
Normal stack:
	 F  C	Rows	Segs	Name	
#0	(*) 1	43	1	'kcat-vs-ph_subset_22.dat'
#1	(*) 1	44	1	'kcat-vs-ph_subset_21.dat'
#2	(*) 1	43	1	'kcat-vs-ph_subset_20.dat'
Each of these datasets has a meta-data named pH whose value is the original x value from kcat-vs-ph.dat. Now, the idea is to run a stats command on the resulting datasets, extracting the average value of x and its standard deviation, together with the value of the meta pH. The most natural and general way to do this is to use run-for-datasets, using the following script file (named process-one.cmds):
stats /meta=pH /output=true /stats=x_average,x_stddev
So the command looks like:
QSoas> run-for-datasets process-one.cmds flagged:data
This command produces an output file containing, for each flagged dataset, a line with x_average, x_stddev, and pH. Then it is just a matter of loading the output file and shuffling the columns into the right order to get the data in the requested form. Overall, this looks like this:
l kcat-vs-ph.dat
split-on-values pH x /flags=data
output result.dat /overwrite=true
run-for-datasets process-one.cmds flagged:data
l result.dat
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2
The slight improvement over what is described above is the use of the output command to write the output to a dedicated file (here result.dat instead of out.dat) and to ensure it is overwritten, so that no data remains from previous runs.

Solution #2

The second solution is almost the same as the first one, with two improvements:
  • the stats command can work with datasets other than the current one, by supplying them to the /buffers= option, so that it is not necessary to use run-for-datasets;
  • the use of the output file can be replaced by the use of the accumulator.
This yields the following, smaller, solution:
l kcat-vs-ph.dat
split-on-values pH x /flags=data
stats /meta=pH /accumulate=* /stats=x_average,x_stddev /buffers=flagged:data
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

Norbert Preining: Future of Cinnamon in Debian

12 June, 2021 - 20:19

OK, this is not an easy post. I have been maintaining Cinnamon in Debian for quite some time, since around the time version 4 came out. The soon (hahaha) to be released Bullseye will carry the last release of the 4 track, but version 5 is already waiting. After Bullseye, the future of Cinnamon in Debian currently looks bleak.

Since my switch to KDE/Plasma, I haven't used Cinnamon in months. I only occasionally tested new releases, but never gave them a real-world test. Having left Gnome3 for its complete lack of usability for pro-users, I escaped to Cinnamon and found a good home there for quite some time, using modern technology but keeping user interface changes conservative. For a long time I hadn't even contemplated using KDE, having been burned during the bad days of KDE 3/4, when bloat-as-bloat-can-be was the best description.

What a revelation it was that KDE/Plasma was more lightweight, faster, responsive, integrated, customizable; all in all, simply great. Since my switch to KDE/Plasma I have not for a second missed anything from the Gnome3 or Cinnamon world.

And that means I will most probably NOT package Cinnamon 5, nor do any real packaging work on Cinnamon for Debian in the future. Of course, I will try to keep maintaining the current set of packages for Bullseye, but for the next release, I think it is time that someone new steps in. Cinnamon packaging taught me a lot about how to deal with multiple related packages, which is of great use in the KDE packaging world.

If someone steps forward, I will surely be around for support and help, but as long as nobody takes the banner, it will mean the end of Cinnamon in Debian.

Please contact me if you are interested!

Kentaro Hayashi: has moved to Team Infrastructure

12 June, 2021 - 19:00

Today, has moved to Team Infrastructure

So far, it was sponsored by FOSSHOST, which has provided us a VPS instance since January 2021. It was located at the OSU Open Source Lab. It worked pretty well. Thanks for the FOSSHOST sponsorship all this time!

Now, it uses a VPS instance which is provided by the Team Infrastructure (still non-DSA managed). It is hosted at HETZNER Cloud.

About

It is an experimental service to demonstrate how to improve the user experience of finding and fixing Debian unstable related bugs, making the "unstable life" comfortable.

Thanks to the Team for sponsoring it!

Junichi Uekawa: Wrote a quick hack to open chroot in emacs tramp.

12 June, 2021 - 15:32
Wrote a quick hack to open chroot in emacs tramp. I wrote a mode for cros_sdk and it was relatively simple. I figured that chroot must be easier. I could write one in about 30 minutes. I need to mount proc and home inside the chroot to make it useful, but here goes. chroot-tramp.el

Mike Gabriel: New Debian Packaging Team: BBB Packaging Team (and Kurento Media Server goes Debian)

12 June, 2021 - 04:35

Today, Fre(i)e Software GmbH has been contracted for packaging Kurento Media Server for Debian. This packaging project will be funded by GUUG e.V. (the German Unix User Group e.V.). A big thanks to the people from GUUG e.V. for making this packaging project possible.

About Kurento Media Server

Kurento is an open source software project providing a platform suitable for creating modular applications with advanced real-time communication capabilities. To learn more about Kurento, please visit the Kurento project website:

Kurento is part of FIWARE. For further information on the relationship of FIWARE and Kurento check the Kurento FIWARE Catalog Entry. Kurento is also part of the NUBOMEDIA research initiative.

Kurento Media Server is a WebRTC-compatible server that processes audio and video streams, doing composable pipeline-based processing of media.

About BigBlueButton

As some of you may know, Kurento Media Server is one of the core components of the BigBlueButton software, an "Open Source Virtual Classroom Software".

The context of the KMS funding is - after several other steps - getting the complete software component stack of BigBlueButton (aka BBB) into Debian some day, so that we can provide BBB as native Debian packages. On Debian. (Currently, one needs to use an always slightly outdated version of Ubuntu.)

Due to this greater context, I just created the Debian BBB Packaging Team on

Outlook and Appreciation

The current project (uploading Kurento Media Server to Debian) will very likely be extended to one year of package maintenance for all Kurento Media Server components in Debian. Extending this maintenance funding to a second year has also been discussed and seems a possible option.

Probably most Debian Developer colleagues will agree with me when I say that Debian packaging is not a one-time shot that ends once the first uploads of software packages have landed and settled. Debian package maintenance is a long-term responsibility and requires long-term commitment. I am very glad that the people at GUUG e.V. are on the same page with me (with us) regarding this. This is much and dearly appreciated. Thank you!!!

What else?

Well, we have also talked about another BigBlueButton component that is not yet in Debian: FreeSwitch. But more on that when the time comes.

How to Join the Debian BBB Packaging Team?

Please ping me via IRC (sunweaver on OFTC IRC) or [matrix] (

How to Support the Debian BBB Packaging Team?

If you, your organization, your company, your municipality, your university, etc. feels like supporting the effort of packaging BigBlueButton for Debian, please get in touch with:

And yes, the company homepage is not online yet, but it is in the making...

Mike (aka sunweaver)

Lisandro Damián Nicanor Pérez Meyer: Firsts steps into QML

12 June, 2021 - 01:15

After years of using and maintaining Qt, there was a piece of the SDK that I never got to use as a developer: QML. Thanks to ICS I took the free (in the sense of cost) course QML Programming — Fundamentals and Beyond.

It consists of seven sessions, which can easily be done in a few days. I did them all in four days, but with enough time available you can do them even faster. Of course, some previous knowledge of Qt is useful.

The only drawback was the need for a corporate e-mail address in order to register (or at least the webpage says so). Apart from that, it is really worth the effort. So, if you are planning on getting into QML, this is definitely a nice way to start.

Colin Watson: SSH quoting

11 June, 2021 - 17:22

A while back there was a thread on one of our company mailing lists about SSH quoting, and I posted a long answer to it. Since then a few people have asked me questions that caused me to reach for it, so I thought it might be helpful if I were to anonymize the original question and post my answer here.

The question was why a sequence of commands involving ssh and fiddly quoting produced the output they did. The first example was this:

$ ssh user@machine.local bash -lc "cd /tmp;pwd"

Oh hi, my dubious life choices have been such that this is my specialist subject!

This is because SSH command-line parsing is not quite what you expect.

First, recall that your local shell will apply its usual parsing, and the actual OS-level execution of ssh will be like this:

[0]: ssh
[1]: user@machine.local
[2]: bash
[3]: -lc
[4]: cd /tmp;pwd

Now, the SSH wire protocol only takes a single string as the command, with the expectation that it should be passed to a shell by the remote end. The OpenSSH client deals with this by taking all its arguments after things like options and the target, which in this case are:

[0]: bash
[1]: -lc
[2]: cd /tmp;pwd

It then joins them with a single space:

bash -lc cd /tmp;pwd

This is passed as a string to the server, which then passes that entire string to a shell for evaluation, so as if you’d typed this directly on the server:

sh -c 'bash -lc cd /tmp;pwd'

The shell then parses this as two commands:

bash -lc cd /tmp
pwd

The directory change thus happens in a subshell (actually it doesn’t quite even do that, because bash -lc cd /tmp in fact ends up just calling cd because of the way bash -c parses multiple arguments), and then that subshell exits, then pwd is called in the outer shell which still has the original working directory.

The second example was this:

$ ssh user@machine.local bash -lc "pwd;cd /tmp;pwd"

Following the logic above, this ends up as if you’d run this on the server:

sh -c 'bash -lc pwd; cd /tmp; pwd'

The third example was this:

$ ssh user@machine.local bash -lc "cd /tmp;cd /tmp;pwd"

And this is as if you’d run:

sh -c 'bash -lc cd /tmp; cd /tmp; pwd'

Now, I wouldn’t have implemented the SSH client this way, because I agree that it’s confusing. But /usr/bin/ssh is used as a transport for other things so much that changing its behaviour now would be enormously disruptive, so it’s probably impossible to fix. (I have occasionally agitated on openssh-unix-dev@ for at least documenting this better, but haven’t made much headway yet; I need to get round to preparing a documentation patch.) Once you know about it you can use the proper quoting, though. In this case that would simply be:

ssh user@machine.local 'cd /tmp;pwd'

Or if you do need to specifically invoke bash -l there for some reason (I’m assuming that the original example was reduced from something more complicated), then you can minimise your confusion by passing the whole thing as a single string in the form you want the remote sh -c to see, in a way that ensures that the quotes are preserved and sent to the server rather than being removed by your local shell:

ssh user@machine.local 'bash -lc "cd /tmp;pwd"'
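If you are generating the inner command programmatically, bash's printf %q builtin can produce that extra layer of quoting for you. A sketch, assuming your local shell is bash:

$ cmd='cd /tmp;pwd'
$ ssh user@machine.local "bash -lc $(printf '%q' "$cmd")"
/tmp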

Shell parsing is hard.

Mike Gabriel: Linux on Acer Spin 3

11 June, 2021 - 13:31

Recently, I bought an Acer Spin 3 Convertible Notebook for the company and provided it to Robert Tari for his daily work on Ayatana Indicators (which currently is funded by the UBports Foundation via my company Fre(i)e Software GmbH).

Some days ago Robert reported back about a sleepless night he had spent with that machine... He got stuck on a tricky issue regarding the installation of Manjaro GNU/Linux on that machine, which could, in the end, be resolved by a not-so-well-documented trick.

Before anyone else spends another sleepless night on this, we thought we'd better share Robert's solution.

So, the below applies to the Acer Spin 3 series (and probably to other Spin models, perhaps even some other Acer laptops):

Acer Spin 3 Pre-Inst Cheat Codes

Before you even plug in the USB install media:

  1. Go to UEFI settings (i.e. BIOS for us elderly people) [F2]
  2. Security -> Set Supervisor Password [Enabled]
  3. Enter the password you'll use
  4. Boot -> Secure Boot -> [Disabled] (you can't disable it without a set supervisor password)
  5. Exit -> Exit Saving Changes
  6. Restart and go to UEFI settings again [F2]
  7. Main -> [Now press CTRL + S] -> VMD Controller -> [Disabled]
  8. Exit -> Exit Saving Changes
  9. Now plug in the install USB and restart

Especially disabling the VMD Controller is essential. Otherwise, GRUB won't find any partitions or EFI-registered boot items after the installation and drops into the EFI recovery shell.
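If in doubt, you can verify from a live session (before installing) whether VMD is still active; on an affected machine an Intel Volume Management Device typically shows up in the PCI listing. A rough check, as the exact device string may vary by model:

$ lspci | grep -i 'volume management'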

Robert hasn't tested the Wacom pen that comes with the device, nor the fingerprint reader, yet.

Everything else works out-of-the-box.

Mike Gabriel (aka sunweaver)

Petter Reinholdtsen: Nikita version 0.6 released - free software archive API server

10 June, 2021 - 22:10

I am very pleased to be able to share with you the announcement of a new version of the archiving system Nikita published by its lead developer Thomas Sødring:

It is with great pleasure that we can announce a new release of nikita, version 0.6. This release makes new record-keeping functionality available. This really is a maturity release, both in terms of functionality and of code. Considerable effort has gone into refactoring the codebase and simplifying the code. Notable changes for this release include:

  • Significantly improved OData parsing
  • Support for business specific metadata and national identifiers
  • Continued implementation of domain model and endpoints
  • Improved testing
  • Ability to export and import from arkivstruktur.xml

We are currently in the process of reaching an agreement with an archive institution to publish their picture archive using nikita with business specific metadata, and we hope that we can share this with you soon. This is an interesting project as it allows the organisation to bring an older picture archive back to life while using the original metadata values stored as business specific metadata. Combined with OData, this significantly increases the scope and use of the archive and will showcase both the flexibility and power of Noark.
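To give an idea of what that OData support looks like in practice, a filtered, sorted and paginated query against such an API could look roughly like this. An illustrative sketch only: the host and path are made up, while mappe (folder) and tittel (title) are standard Noark names:

$ curl 'https://archive.example.org/api/arkivstruktur/mappe?$filter=contains(tittel,%27budsjett%27)&$top=10&$orderby=tittel'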

I really think we are approaching a version 1.0 of nikita, even though there is still a lot of work to be done. The notable work at the moment is to implement access-control and full text indexing of documents.

My sincere thanks to everyone who has contributed to this release!

- Thomas

Release 0.6 2021-06-10 (d1ba5fc7e8bad0cfdce45ac20354b19d10ebbc7b)

  • Refactor metadata entity search
  • Remove redundant security configuration
  • Make OpenAPI documentation work
  • Change database structure / inheritance model to a more sensible approach
  • Make it possible to move entities around the fonds structure
  • Implemented a number of missing endpoints
  • Make sure yml files are in sync
  • Implemented/finalised storing and use of
    • Business Specific Metadata
    • Norwegian National Identifiers
    • Cross Reference
    • Keyword
    • StorageLocation
    • Author
    • Screening for relevant objects
    • ChangeLog
    • EventLog
  • Make generation of updated docker image part of successful CI pipeline
  • Implement pagination for all list requests
    • Refactor code to support lists
    • Refactor code for readability
    • Standardise the controller/service code
  • Finalise File->CaseFile expansion and Record->registryEntry/recordNote expansion
  • Improved Continuous Integration (CI) approach via gitlab
  • Changed conversion approach to generate tagged PDF documents
  • Updated dependencies
    • For security reasons
    • Brought codebase to spring-boot version 2.5.0
    • Remove imports of unnecessary dependencies
    • Remove non-used metrics classes
  • Added new analysis to CI
  • Implemented storing of Keyword
  • Implemented storing of Screening and ScreeningMetadata
  • Improved OData support
    • Better support for inheritance in queries where applicable
    • Brought in more OData tests
    • Improved OData/hibernate understanding of queries
    • Implement $count, $orderby
    • Finalise $top and $skip
    • Make sure & is used between query parameters
  • Improved Testing in codebase
    • A new approach for integration tests to make test more readable
    • Introduce tests in parallel with code development for TDD approach
    • Remove test that required particular access to storage
  • Implement case-handling process from received email to case-handler
    • Develop required GUI elements (digital postroom from email)
    • Introduced leader, quality control and postroom roles
  • Make PUT requests return 200 OK not 201 CREATED
  • Make DELETE requests return 204 NO CONTENT not 200 OK
  • Replaced 'oppdatert*' with 'endret*' everywhere to match latest spec
  • Upgrade Gitlab CI to use python > 3 for CI scripts
  • Bug fixes
    • Fix missing ALLOW
    • Fix reading of objects from jar file during start-up
    • Reduce the number of warnings in the codebase
    • Fix delete problems
    • Make better use of cascade for "leaf" objects
    • Add missing annotations where relevant
    • Remove the use of ETAG for delete
    • Fix missing/wrong/broken rels discovered by runtest
    • Drop unofficial convertFil (konverterFil) end point
    • Fix regex problem for dateTime
    • Fix multiple static analysis issues discovered by coverity
    • Fix proxy problem when looking for object class names
    • Add many missing translated Norwegian to English (internal) attribute/entity names
    • Change UUID generation approach to allow code to also set a value
    • Fix problem with Part/PartPerson
    • Fix problem with empty OData search results
    • Fix metadata entity domain problem
  • General Improvements
    • Makes future refactoring easier as coupling is reduced
    • Allow some constant variables to be set from property file
    • Refactor code to make reflection work better across codebase
    • Reduce the number of @Service layer classes used in @Controller classes
    • Be more consistent on naming of similar variable types
    • Start printing rels/href if they are applicable
    • Cleaner / standardised approach to deleting objects
    • Avoid concatenation when using StringBuilder
    • Consolidate code to avoid duplication
    • Tidy formatting for a more consistent reading style across similar class files
    • Make throw a log.error message, not an info message
    • Make throw print the log value rather than printing in multiple places
    • Add some missing pronom codes
    • Fix time formatting issue in Gitlab CI
    • Remove stale / unused code
    • Use only UUID datatype rather than combination String/UUID for systemID
    • Mark variables final and @NotNull where relevant to indicate intention
  • Change Date values to DateTime to maintain compliance with Noark 5 standard
  • Domain model improvements using Hypersistence Optimizer
    • Move @Transactional from class to methods to avoid borrowing the JDBC Connection unnecessarily
    • Fix OneToOne performance issues
    • Fix ManyToMany performance issues
    • Add missing bidirectional synchronization support
    • Fix ManyToMany performance issue
  • Make List and Set use the final keyword to avoid potential problems during update operations
  • Changed internal URLs, replaced "hateoas-api" with "api".
  • Implemented storing of Precedence.
  • Corrected handling of screening.
  • Corrected _links collection returned for list of mixed entity types to match the specific entity.
  • Improved several internal structures.

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.