Planet Debian


Bits from Debian: Debian Artwork: Call for Proposals for Debian 10 (Buster)

17 June, 2018 - 18:30

This is the official call for artwork proposals for the Buster cycle.

For the most up to date details, please refer to the wiki.

We would also like to take this opportunity to thank Juliette Taka Belin for doing the Softwaves theme for stretch.

The deadline for submissions is 2018-09-05.

The artwork is usually picked based on which themes look the most:

  • "Debian": admittedly not the most defined concept, since everyone has their own take on what Debian means to them.
  • "plausible to integrate without patching core software": as much as we love some of the insanely hot looking themes, some would require heavy GTK+ theming and patching GDM/GNOME.
  • "clean / well designed": without becoming something that gets annoying to look at a year down the road. Examples of good themes include Joy, Lines and Softwaves.

If you'd like more information, please use the Debian Desktop mailing list.

Arturo Borrero González: Netfilter Workshop 2018 Berlin summary

17 June, 2018 - 00:28

This weekend we had Netfilter Workshop 2018 in Berlin, Germany.

Lots of interesting talks happened, mostly surrounding nftables and how to move forward from the iptables legacy world to the new, modern nft framework.

In a nutshell, the Netfilter project, a community-driven FLOSS project, has agreed to consider iptables a legacy tool. This confidence comes from the maturity of the nftables framework, which is now largely compatible with the old iptables API, including extensions (matches and targets).

Starting now, iptables upstream releases will ship the old iptables binary as /sbin/iptables-legacy, and likewise for its companion tools.

To summarize:

  • /sbin/iptables-legacy
  • /sbin/iptables-legacy-save
  • /sbin/iptables-legacy-restore
  • /sbin/ip6tables-legacy
  • /sbin/ip6tables-legacy-save
  • /sbin/ip6tables-legacy-restore
  • /sbin/arptables-legacy
  • /sbin/ebtables-legacy

The new binaries use the nf_tables kernel backend instead (what was formerly known as 'iptables-compat'). Should you find rough edges with the new binaries, you can always fall back to the old -legacy tools. These exist for people who want to keep the old iptables semantics, but the recommendation is to migrate to nftables as soon as possible.
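
For anyone starting that migration, the shape of a native ruleset is easy to pick up. Here is a minimal sketch (the rule contents are my own illustration, not from the workshop):

```
# /etc/nftables.conf (minimal illustration)
table inet filter {
    chain input {
        type filter hook input priority 0; policy accept;
        ct state established,related accept
        tcp dport 22 accept
    }
}
```

Existing rules can also be converted one at a time with the iptables-translate tool that ships alongside the new binaries, e.g. `iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT`.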

Moving to nftables brings improved performance, new features, new semantics, and in general a modern framework. All major distributions will implement these changes soon, including Red Hat, Fedora, CentOS, SUSE, Debian and derivatives. We also had some talks regarding firewalld, the firewalling service used by some RPM-based distros; it gained support for nftables starting with v0.6.0. This is great news, since firewalld is the main top-level firewalling mechanism in these distributions. More good news is that the libnftables high-level API is in great shape: it recently gained a high-level JSON API thanks to Phil Sutter, and firewalld will use this new JSON API soon.

I gave a talk about the status of Netfilter software packages at Debian, and shared my plans to implement these iptables -> nftables changes in the near future.

We also had an interesting talk by a Cloudflare engineer about how they use the TPROXY Netfilter infrastructure to serve thousands of customers. Some discussion followed about caveats, improvements, and how nftables could be a better fit if it gains TPROXY-like features. In the field of networking at scale, some VMware engineers also joined the conversation about nft connlimit and nf_conncount, a new approach in nftables to rate-limiting/policing based on conntrack data. This was followed by a presentation by Pablo Neira about the new flow offload infrastructure for nftables, which can act as a complete kernel bypass in the packet forwarding case.

Jozsef Kadlecsik shared a deep and detailed investigation on ipset vs nftables and how we could match both frameworks. He gave an overview of what’s missing, what’s already there and what could be a benefit from users migrating from ipset to nftables.

We had some space for load balancing as well. Laura García shared the latest news on the nftlb project, the nftables-based load balancer, along with some interesting numbers on how retpolines affect Netfilter performance: for her use cases, the impact of retpolines is about 17% with nftables and 40% with iptables.

Florian Westphal gave a talk on br_netfilter and how we could improve the Linux kernel networking stack, from the Netfilter point of view, for bridge use cases. Right now all sorts of nasty things are done to store the information and context required for packets traveling through bridges (which may need to be evaluated by Netfilter). We have a lot of margin for improvement, and Florian's plan is to invest time there.

We had a very interesting legal talk by Dr. Till Jaeger regarding GPL enforcement in Germany, related to the Patrick McHardy situation. Some good work is being done in this field to defend the community against activities that hurt the interests of all Linux users and developers.

Harsha Sharma, 18 years old, from India, gave a talk explaining her work on nftables to the rest of the Netfilter contributors, made possible by internship programs like Outreachy and Google Summer of Code. Varsha and Harsha were both brave to travel so far from home to join a meeting composed mostly of European white men. We were joined by three women this workshop, and I would like to believe this is a sign of our inclusiveness, of being a healthy community.

The workshop was sponsored by VMware, Zevenet, Red Hat, Intra2net, OISF, Stamus Networks, and Suricata.

Steve Kemp: Monkeying around with interpreters

16 June, 2018 - 17:15

Recently I've had an overwhelming desire to write a BASIC interpreter. I can't think why, but the idea popped into my mind and wouldn't go away.

So I challenged myself to spend the weekend looking at it.

Writing an interpreter is a pretty well-understood problem:

  • Parse the input into tokens, such as "LET", "GOTO", "INT:3".
    • This is called lexical analysis / lexing.
  • Take those tokens and build an abstract syntax tree.
    • The AST.
  • Walk the tree, evaluating as you go.
    • Hey ho.

Of course BASIC is annoying because a program is prefixed by line-numbers, for example:

 20 GOTO 10

The naive way of approaching this is to repeat the whole process for each line, so a program would consist of an array of input-strings, with each line treated independently.

Anyway, reminding myself of all this fun took a few hours, and during the course of that time I came across Writing an Interpreter in Go, which seems to be well-regarded. The book walks you through creating an interpreter for a language called "Monkey".

I found a bunch of implementations, which were nice and clean. So to give myself something to do I started by adding a new built-in function rnd(). Then I tested this:

let r = 0;
let c = 0;

for( r != 50 ) {
   let r = rnd();
   let c = c + 1;
}

puts "It took ";
puts c;
puts " attempts to find a random-number equalling 50!";

Unfortunately this crashed. It crashed inside the body of the loop, and it seemed that the projects I looked at each handled the let statement in a slightly odd way: the statement wouldn't return a value, and would instead fall through a case statement into the next implementation.

For example, in monkey-interpreter we see that happen in this section. (Notice how there's no return after the env.Set call?)

So I reported this as a meta-bug to the book's author. It might be that the master source is wrong, or it might be that unrelated individuals all made the same error, meaning the text is unclear.

Anyway, the end result is I have a language, in Go, that I think I understand and have been able to modify. Now I'll have to find some time to go back to the BASIC work.

I found a bunch of BASIC interpreters, including ubasic, but unfortunately almost all of them were missing many, many features, such as operations like RND(), ABS() and COS().

Perhaps room for another interpreter after all!

Sven Hoexter: imagine you no longer own your infrastructure

16 June, 2018 - 02:04

Sounds crazy and nobody would ever do that, but just for a moment imagine you no longer own your infrastructure.

Imagine you just run your container on something like GKE with Kubernetes.

Imagine you build your software with something like Jenkins running in a container, using the GKE provided docker interface to build stuff in another container.

And for a $reason imagine you're not using the Google provided container registry, but your own one hosted somewhere else on the internet.

Of course you access your registry via HTTPS, so your connection is secured at the transport level.

Now imagine your certificate is at the end of its validity period. Like ending the next day.

Imagine you just do what you do every time that happens, and order a new certificate from one of the leftover CAs, like DigiCert.

You receive your certificate within 15 minutes.

You deploy it to your registry.

You validate that your certificate chain validates against different certificate stores.

The one shipped in ca-certificates on various Debian releases you run.

The one in your browser.

Maybe you even test it with Google Chrome.

Everything is cool and validates. I mean, of course it does. DigiCert is a known CA player and the root CA certificate was created five years ago. A lot of time for a CA to be included and shipped in many places.

But still there is one issue. The docker commands you run in your build jobs fail to pull images from your registry because the certificate can not be validated.

You take a look at the underlying OS and indeed it's not shipping the 5-year-old root CA certificate that issued your intermediate CA, which just issued your new server certificate.
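
In that situation, the quickest way to see which chain the registry actually serves, versus what the host trusts, is one openssl invocation (the registry host name below is made up):

```shell
# Print the subject, issuer and validity dates of the certificate the registry presents:
openssl s_client -connect registry.example.org:443 \
    -servername registry.example.org </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```

Comparing that issuer against what is present in the host's trust store usually pinpoints the missing root.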

If it were your own infrastructure you would now just ship the missing certificate.

Maybe by including it in your internal ca-certificates build.

Or by just deploying it with ansible to /usr/share/ca-certificates/myfoo/ and adding that to the configuration in /etc/ca-certificates.conf so update-ca-certificates can create the relevant hash links for you in /etc/ssl/certs/.
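
On a machine you do control, that ansible route boils down to a couple of commands (the certificate file name here is hypothetical):

```shell
# Ship the missing root CA and register it with ca-certificates (run as root):
mkdir -p /usr/share/ca-certificates/myfoo
cp digicert-root.crt /usr/share/ca-certificates/myfoo/digicert-root.crt
# Entries in /etc/ca-certificates.conf are relative to /usr/share/ca-certificates:
echo 'myfoo/digicert-root.crt' >> /etc/ca-certificates.conf
# Regenerate the hash links in /etc/ssl/certs/:
update-ca-certificates
```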

But this time it's not your infrastructure, and you can not modify the operating system context your docker containers are running in.

Sounds insane, right? Luckily we're just making up a crazy story and something like that would never happen in the real world, because we all insist on owning our infrastructure.

Sune Vuorela: Partially initialized objects

16 June, 2018 - 01:15

I found this construct some time ago. It took some reading to understand why it works. I'm still not sure if it is actually legal, or if it only works because m_derivedData is not accessed in Base::Base.

struct Base {
    std::string& m_derivedData;
    Base(std::string& data) : m_derivedData(data) {}
};

struct Derived : public Base {
    std::string m_data;
    Derived() : Base(m_data), m_data("foo") {}
};

Andrej Shadura: Working in open source: part 1

15 June, 2018 - 21:08

Three years ago on this day I joined Collabora to work on free software full-time. It still feels a bit like yesterday, despite so much time passing since then. In this post, I’m going to reconstruct the events of that year.

Back in 2015, I worked for Alcatel-Lucent, who had a branch in Bratislava. I can’t say I didn’t like my job — quite the contrary, I found it quite exciting: I worked with mobile technologies such as 3G and LTE, I had really knowledgeable and smart colleagues, and it was the first ‘real’ job (not counting the small business my father and I ran) where using Linux for development was not only not frowned upon, but was a mandatory part of the standard workflow, and running it on your workstation was common too, even if not officially supported.

However, after working for Alcatel-Lucent for a year, I found there were things about the job I didn’t like. We developed proprietary software for the routers and gateways the company produced, and despite the fact that we used quite a lot of open source libraries and free software tools, we very rarely contributed anything back, and when it happened at all, it usually happened unofficially and not on the company’s time. Each time I suggested we upstream our local changes so that we wouldn’t have to maintain three different patchsets for different upstream versions ourselves, I was told I knew nothing about how the business worked, and that upstreaming would mean giving up control of the code, which we couldn’t do. At the same time, we had no issue incorporating permissively-licensed free software code. The more I worked at Alcatel-Lucent, the more I felt I was just accumulating knowledge of a proprietary product that I would never be able to reuse if and when I left the company. At some point, in a discussion at work, someone said that doing software development (including my free software work) even in my free time might constitute a conflict of interest, and the company might be unhappy about it. Add to that the fact that despite relatively flexible hours, working from home was almost never allowed, nor was working from the company’s other offices.

These were the major reasons I quit my job at Alcatel-Lucent; my last day was 10 April 2015. Luckily, we reached an agreement that I would still get my normal pay during the notice period despite not actually going to the office or doing any work, which allowed me to enjoy two months of working on my hobby projects without having to worry about money.

To be honest, I don’t want it to seem like I quit my job just because it was all proprietary software and planned to live off donations or something; it wasn’t quite like that. While still working for Alcatel-Lucent, I was offered a job developing real-time software running inside the Linux kernel. I declined that offer, mostly because it was a small company with fewer than a dozen employees and I would have needed to take over responsibility for a huge piece of code (which was, in fact, also proprietary), but it taught me something: there were jobs out there where my knowledge of Linux was of actual use, even in the city I lived in. The other thing I learnt was this: there were remote Linux jobs too, but I needed to become self-employed to take them, since my immigration status at the time didn’t allow me to be employed abroad.

The business license I received within a few days of quitting my job

Feeling free as a bird, with the business registered, I spent two months hacking, relaxing, travelling to places in Slovakia and Ukraine, and thinking about how I was going to earn money once my two-month vacation ended.

In Trenčín

The obvious idea was to consult, but that wouldn’t guarantee me a constant income. I could consult on Debian or Linux in general, or on version control systems: in 2015 I was an active member of the Kallithea project, and I believed I could help companies migrate from CVS and Subversion to Mercurial and Git hosted internally on Kallithea. (I actually also got a job offer from Unity Technologies to hack on Kallithea and related tools, but I had to decline it since it would have required moving to Copenhagen, which I wasn’t ready for, despite liking the place when I visited them in May 2015.)

Another obvious idea was working for Red Hat, but knowing how slow their HR department was, I didn’t put too much hope in it. Besides, when I contacted them, they said they would need approval for me to work for them remotely as a self-employed contractor, lowering my chances of getting a job there without relocating to Brno or elsewhere.

At some point, reading Planet Debian, I found a blog post by Simon McVittie on polkit in which he mentioned Collabora. Soon I applied, had my interviews, and received a job offer.

To be continued later today…

Steinar H. Gunderson: Qt flag types

15 June, 2018 - 15:00

typeid(Qt::AlignRight) = Qt::AlignmentFlag (implicitly convertible to QVariant)
typeid(Qt::AlignRight | Qt::AlignVCenter) = QFlags<Qt::AlignmentFlag> (not implicitly convertible to QVariant)
typeid(Qt::AlignRight + Qt::AlignVCenter) = int (implicitly convertible to QVariant)

Qt, what is wrong with you?

Daniel Pocock: The questions you really want FSFE to answer

15 June, 2018 - 14:28

As the last man standing as a fellowship representative in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.

Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately I would seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf so if you have any concerns, please reply.

Your fellowship representative

bisco: Third GSoC Report

15 June, 2018 - 12:28

The last two weeks went by pretty fast, probably also because the last courses of this semester started and I have a lot of additional work to do.

I closed the last report by writing about the implementation of the test suite. I’ve added a lot more tests since then, and there are now around 80 tests that run with every commit. Using unit tests that do some basic testing really makes life a lot easier; next time I start a software project I’ll definitely start writing tests early on. I’ve also read a bit about the difference between integration and unit tests. A unit test should only test one specific piece of functionality, so I refactored some of the old tests and made them more granular.

I then also looked into coding style checkers and decided to go with flake8. There was a huge pile of coding style violations in my code, most of them lines longer than 79 characters. I’ve integrated flake8 into the test suite and removed all the violations. One more thing about Python: I’ve read python3 with pleasure, which gives a great overview of some of the new features of Python 3, and I’ve made some notes about things I want to adopt (i.e. pathlib).
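
For reference, the check that caught most of those violations is flake8's E501. A quick sketch of what it flags (flake8 itself is the tool to use; the grep below only illustrates the raw condition it checks):

```shell
# Write a two-line file where the second line exceeds 79 characters:
printf 'short = 1\n%s = 2\n' "$(printf 'a%.0s' $(seq 1 90))" > demo.py
# flake8 would flag line 2 with an E501 "line too long" error.
# The raw condition is simply "more than 79 characters on a line":
grep -c '.\{80,\}' demo.py   # prints: 1
```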

Regarding the functionality of nacho, I’ve added the possibility to delete an account. SSH keys are now validated on upload, and it is possible to configure the key types that are allowed. I initially just checked whether the key string consists of valid base64-encoded data, but that was not really a good solution, so I decided to use sshpubkeys to check the validity of the keys. Nacho now also checks the profile image before storing it in the LDAP database: it is possible to configure the image size and list allowed image types, which are verified using python-magic. I also made a big change concerning the configuration: all the relevant configuration options are now moved to a separate configuration file in JSON format, which is parsed when nacho starts. This also makes it a lot easier to have default values and let users override them in their local config. I also updated the documentation and the Debian package.
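
To give an idea of the shape of such a file (the key names here are made up for illustration; the real ones are defined in nacho's repository), a JSON configuration along these lines makes defaults and local overrides straightforward:

```json
{
    "allowed_ssh_key_types": ["ssh-ed25519", "ssh-rsa"],
    "max_image_size": 102400,
    "allowed_image_types": ["image/png", "image/jpeg"]
}
```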

Now that the issues with nacho are slowly becoming smaller, I’ll start to look into existing SSO solutions that can then be used with the LDAP backend. There are four solutions on my list at the moment: Keycloak, Ipsilon, LemonLDAP::NG and Glewlwyd.

Gunnar Wolf: «Understanding the Digital World» — By Brian Kernighan

15 June, 2018 - 07:07

I came across Kernighan's 2017 book, Understanding the Digital World — What You Need to Know about Computers, the Internet, Privacy, and Security. I picked it up thanks to a random recommendation I read somewhere I don't recall. And it's really a great read.

Of course, basically every reader who usually comes across this blog will be familiar with Kernighan. Be it because of his most classic books from the 1970s, The Unix Programming Environment or The C Programming Language, or the much more recent The Practice of Programming or The Go Programming Language, Kernighan is a world-renowned authority on technical content for highly technical professionals — and his books tend to define the playing field later on.

But this book I read is... for the general public. And it is superb at that.

Kernighan states in his Preface that he teaches a very introductory course at Princeton (with a title he admits is too vague, Computers in our World) to people in the social sciences and humanities. And this book shows how he explains all sorts of scary stuff to newcomers.

As it's easier than writing a full commentary on it, I'll just copy the table of contents (only to the section level; it gets too long if I also list subsections). The list of contents is very thorough (and the book is only 238 pages long!), but take a look at basically every chapter... and picture explaining those topics to computing laymen. An admirable feat!

  • Part I: Hardware
    • 1. What's in a computer?
      • Logical construction
      • Physical construction
      • Moore's Law
      • Summary
    • 2. Bits, Bytes, and Representation of Information
      • Analog versus Digital
      • Analog-Digital Conversion
      • Bits, Bytes and Binary
      • Summary
    • 3. Inside the CPU
      • The Toy Computer
      • Real CPUs
      • Caching
      • Other Kinds of Computers
      • Summary

    Wrapup on Hardware

  • Part II: Software
    • 4. Algorithms
      • Linear Algorithms
      • Binary Search
      • Sorting
      • Hard Problems and Complexity
      • Summary
    • 5. Programming and Programming Languages
      • Assembly Language
      • High Level Languages
      • Software Development
      • Intellectual Property
      • Standards
      • Open Source
      • Summary
    • 6. Software Systems
      • Operating Systems
      • How an Operating System works
      • Other Operating Systems
      • File Systems
      • Applications
      • Layers of Software
      • Summary
    • 7. Learning to Program
      • Programming Language Concepts
      • A First JavaScript Example
      • A Second JavaScript Example
      • Loops
      • Conditionals
      • Libraries and Interfaces
      • How JavaScript Works
      • Summary

    Wrapup on Software

  • Part III: Communications
    • 8. Networks
      • Telephones and Modems
      • Cable and DSL
      • Local Area Networks and Ethernet
      • Wireless
      • Cell Phones
      • Bandwidth
      • Compression
      • Error Detection and Correction
      • Summary
    • 9. The Internet
      • An Internet Overview
      • Domain Names and Addresses
      • Routing
      • TCP/IP protocols
      • Higher-Level Protocols
      • Copyright on the Internet
      • The Internet of Things
      • Summary
    • 10. The World Wide Web
      • How the Web works
      • HTML
      • Cookies
      • Active Content in Web Pages
      • Active Content Elsewhere
      • Viruses, Worms and Trojan Horses
      • Web Security
      • Defending Yourself
      • Summary
    • 11. Data and Information
      • Search
      • Tracking
      • Social Networks
      • Data Mining and Aggregation
      • Cloud Computing
      • Summary
    • 12. Privacy and Security
      • Cryptography
      • Anonymity
      • Summary
    • 13. Wrapping up

I must say, I also very much enjoyed learning of my overall ideological alignment with Brian Kernighan. I am very opinionated, but I don't believe he made me scoff even mildly — and he covers many issues I have strong feelings about (free software, anonymity, the way the world works...)
So, maybe I enjoyed this book so much because I enjoy teaching, and it conveys great ways to teach the topics I'm most passionate about. But, anyway, I have felt for several days the urge to share this book with the group of people that come across my blog ☺

Kees Cook: security things in Linux v4.17

15 June, 2018 - 06:23

Previously: v4.16.

Linux kernel v4.17 was released last week, and here are some of the security things I think are interesting:

Jailhouse hypervisor

Jan Kiszka landed Jailhouse hypervisor support, which uses static partitioning (i.e. no resource over-committing), where the root “cell” spawns new jails by shrinking its own CPU/memory/etc resources and hands them over to the new jail. There’s a nice write-up of the hypervisor on LWN from 2014.

Sparc ADI

Khalid Aziz landed the userspace support for Sparc Application Data Integrity (ADI or SSM: Silicon Secured Memory), which is the hardware memory coloring (tagging) feature in Sparc M7. I’d love to see this extended into the kernel itself, as it would kill linear overflows between allocations, since the base pointer being used is tagged to belong to only a certain allocation (sized to a multiple of cache lines). Any attempt to increment beyond, into memory with a different tag, raises an exception. Enrico Perla has some great write-ups on using ADI in allocators and a comparison of ADI to Intel’s MPX.

new kernel stacks cleared on fork

It was possible that old memory contents would live in a new process’s kernel stack. While normally not visible, “uninitialized” memory read flaws or read overflows could expose these contents (especially stuff “deeper” in the stack that may never get overwritten for the life of the process). To avoid this, I made sure that new stacks were always zeroed. Oddly, this “priming” of the cache appeared to actually improve performance, though it was mostly in the noise.

MAP_FIXED_NOREPLACE

As part of further defense in depth against attacks like Stack Clash, Michal Hocko created MAP_FIXED_NOREPLACE. The regular MAP_FIXED has a subtle behavior not normally noticed (but used by some, so it couldn’t just be fixed): it will replace any overlapping portion of a pre-existing mapping. This means the kernel would silently overlap the stack into mmap or text regions, since MAP_FIXED was being used to build a new process’s memory layout. Instead, MAP_FIXED_NOREPLACE has all the features of MAP_FIXED without the replacement behavior: it will fail if a pre-existing mapping overlaps with the newly requested one. The ELF loader has been switched to use MAP_FIXED_NOREPLACE, and it’s available to userspace too, for similar use-cases.

pin stack limit during exec

I used a big hammer and pinned the RLIMIT_STACK values during exec. There were multiple methods to change the limit (through at least setrlimit() and prlimit()), and there were multiple places the limit got used to make decisions, so it seemed best to just pin the values for the life of the exec so no games could get played with them. Too much assumed the value wasn’t changing, so better to make that assumption actually true. Hopefully this is the last of the fixes for these bad interactions between stack limits and memory layouts during exec (which have all been defensive measures against flaws like Stack Clash).

Variable Length Array removals start

Following some discussion over Alexander Popov’s ongoing port of the stackleak GCC plugin, Linus declared that Variable Length Arrays (VLAs) should be eliminated from the kernel entirely. This is great because it kills several stack exhaustion attacks, including weird stuff like stepping over guard pages with giant stack allocations. However, with several hundred uses in the kernel, this wasn’t going to be an easy job. Thankfully, a whole bunch of people stepped up to help out: Gustavo A. R. Silva, Himanshu Jha, Joern Engel, Kyle Spiers, Laura Abbott, Lorenzo Bianconi, Nikolay Borisov, Salvatore Mesoraca, Stephen Kitt, Takashi Iwai, Tobin C. Harding, and Tycho Andersen. With Linus Torvalds and Martin Uecker, I also helped rewrite the max() macro to eliminate false positives seen by the -Wvla compiler option. Overall, about 1/3rd of the VLA instances were solved for v4.17, with many more coming for v4.18. I’m hoping we’ll have entirely eliminated VLAs by the time v4.19 ships.

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.18; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Athos Ribeiro: Some notes on the OBS Documentation

14 June, 2018 - 11:06

This is the fourth post in my Google Summer of Code 2018 series. Links to the previous posts can be found below:

Open Build Service Manuals

OBS provides several manuals on their web site, including an admin and a user guide. Since I needed to travel to an academic conference last week (too many hours in airplanes), I took some time to read the full OBS documentation to get a better understanding of the tool we have been deploying. While reading the documentation, I took some notes on points relevant to our GSoC project (and sent a patch to fix a few typos in the OBS documentation, which I discuss below).

Hardware requirements

There is no need to distribute the different OBS server services across machines, since our instance will not process heavy build loads. We do want to separate the server services from the OBS workers (package builders), so that expensive builds will not compromise the server's performance.

According to the OBS documentation, we need:

  • 1 core for each scheduler architecture
  • 4GB RAM for each scheduler architecture
  • 50GB of disk per architecture for each supported build distribution

We are working with a single build distribution (Debian unstable). Therefore, we only need 50GB of extra disk for our OBS instance (unless we want to mirror the whole distribution instead of using OBS's Download on Demand feature).

We would like to work with 3 different architectures: i686, x86_64 and ARM. Hence, we need 12GB of RAM and 3 cores:

  • 12GB RAM
  • 60GB disk
  • 3 cores

OBS Instance Configuration

We want to change some instance configurations like

  • Change OBS instance description
  • Set administrator email
  • Disable new users sign up: since all builds in this OBS instance will be fired automatically and no new projects will be configured for now, we will not allow people to create accounts in our OBS instance.

It is important to note that the proper way of changing a project’s configuration is through the API calls. Therefore, we will need to make such calls in our salt scripts.

To list OBS configurations:

osc -A api /configuration

To redefine OBS configurations:

osc -A api /configuration -T new_config_file.xml
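
The uploaded file is an XML document. A minimal sketch covering the settings above might look like this (the element names are best double-checked against what `osc api /configuration` returns on your instance):

```xml
<configuration>
  <title>Debian Clang rebuilds</title>
  <description>Rebuilds of Debian unstable using clang</description>
  <admin_email>admin@example.org</admin_email>
  <registration>deny</registration>
</configuration>
```
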

Workers configuration

OBS workers need to be allowed to connect to the server, configured in /etc/obs/. By default the server accepts connections from any node in the network, but we can (and should) force OBS to accept connections only from our own nodes.

Source Services

OBS provides a way to run scripts that change sources before builds. This may be useful for building against Clang.

To create a source service, we must place a script in the /usr/lib/obs/service/ directory and create a new _service file at either the package or the project level.

_service is an XML file pointing to our script under /usr/lib/obs/service/ and providing possible parameters to the script:

 <services>
   <service name="" mode="MODE">
     <param name="PARAMETER1">PARAMETER1_VALUE</param>
   </service>
 </services>

Self-signed certificates

For testing purposes there is no need to obtain proper SSL certificates; we can generate and self-sign our own:

mkdir /srv/obs/certs
openssl genrsa -out /srv/obs/certs/server.key 1024
openssl req -new -key /srv/obs/certs/server.key -out /srv/obs/certs/server.csr
openssl x509 -req -days 365 -in /srv/obs/certs/server.csr -signkey /srv/obs/certs/server.key -out /srv/obs/certs/server.crt
cat /srv/obs/certs/server.key /srv/obs/certs/server.crt > /srv/obs/certs/server.pem

Finally, we must trust our certificate:

cp /srv/obs/certs/server.pem /etc/ssl/certs/
c_rehash /etc/ssl/certs/
Message bus

OBS supports publishing events such as build results and package updates to RabbitMQ. In the future, we could also set up a RabbitMQ instance so other services can listen on a queue for our Clang build results.
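To make that future idea concrete, here is a hedged sketch of such a listener. The queue name and the message format are pure assumptions for illustration; the connection code follows pika's documented API, but nothing here is an existing OBS integration.

```python
import json


def handle_build_result(body: bytes) -> str:
    """Parse a hypothetical build-result message and summarize it."""
    event = json.loads(body)
    return f"{event['package']}: {event['status']}"


def listen(queue: str = "clang-build-results") -> None:
    """Consume build results from a local broker (needs pika and RabbitMQ)."""
    import pika  # imported lazily so the parsing helper stays dependency-free

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=queue)
    channel.basic_consume(
        queue=queue,
        on_message_callback=lambda ch, method, props, body: print(handle_build_result(body)),
        auto_ack=True,
    )
    channel.start_consuming()
```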

Next steps (A TODO list to keep on the radar)
  • Write patches for the OBS worker issue described in post 3
  • Configure Debian projects on OBS with salt, not manually
  • Change the default builder to perform builds with clang
  • Trigger new builds using dak/mailing list messages
  • Verify the scripts' idempotency and propose a patch to the opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)

Louis-Philippe Véronneau: IMAP Spam Begone (ISBG) version 2.1.0 is out!

14 June, 2018 - 11:00

When I first started at the non-profit where I work, one of the problems people had was rampant spam in their mailboxes. The email addresses we use are pretty old (10+ years), and over time they have been added to every spam list there is.

That would not be a real problem if our email hosting company did not have very bad spam filters. They are a workers' co-op and charge us next to nothing for hosting our emails, but sadly they lack the resources to run a real Bayesian spam-filtering solution like SpamAssassin. "Luckily" for us, it seems that a lot of ISPs and email hosting companies also tend to have pretty bad spam filtering on the mailboxes they provide, so there were already a few programs out there to fix this.

One of the solutions I found to alleviate this problem was to use IMAP Spam Begone (ISBG), a script that makes it easy to scan an IMAP inbox for spam using your own SpamAssassin server and get your spam moved around via IMAP. Since then, I've been maintaining the upstream project.

At the time, ISBG was somewhat abandoned and was mostly a script made of old Python 2 code. No classes, no functions, just a long script that ran from top to bottom.

Well, I'm happy to say that ISBG now has a new major release! Version 2.1.0 is out and replaces the last main release, 1.0.0. From a script, ISBG has evolved into a full-fledged Python module using classes and functions. Although the code still works with Python 2, everything is now Python 3 compliant as well. We even started using CI tests recently!

That, and you know, tons of bugs were fixed. I'd like to thank all the folks who submitted patches, as very little of the actual code was written by me.

If you want to give ISBG a try, you can find the documentation here. Here's also a nice terminal capture I made of ISBG working in verbose mode:

Elana Hashman: Looking back on "Teaching Python: The Hard Parts"

14 June, 2018 - 06:00

One of my goals when writing talks is to produce content with a long shelf life. Because I'm one of those weird people that prefers to write new talks for new events, I feel like it'd be a waste of effort if my talks didn't at least age well. So how do things measure up if I look back on one of my oldest?

"Teaching Python: The Hard Parts" remains one of my most popular talks, despite presenting it just one time at PyCon US 2016. For most of the past two years, it held steady as a top 10 talk from PyCon 2016 by popularity on YouTube (although it was recently overtaken by a few hundred views 😳), even when counting it against the keynotes (!), and most of the YouTube comments are shockingly nice (!!).

Well, actually

Not everyone was a fan. Obviously I should have known better than to tell instructors they didn't have to use Python 3:

Did I give bad advice? Was mentioning the usability advantage of better library support and documentation SEO with Python 2 worth the irreparable damage I might have done to the community?

Matt's not the only one with a chip on his shoulder: the Python 2 → 3 transition has been contentious, and much ink has been spilled on the topic. A popular author infamously wrote a long screed claiming "PYTHON 3 IS SUCH A FAILURE IT WILL KILL PYTHON". Spoiler alert: Python is still alive, and the author updated his book for Python 3.

I've now spent a few years writing 2/3 compatible code, and am on the cusp of dropping Python 2 entirely. I've felt bad for not weighing in on the topic publicly, because people might have looked to this talk for guidance and wouldn't know my advice has changed over the past two years.

A little history

I wrote this talk based on my experiences teaching Python in the winter and fall of 2014, and presented it in early 2016. Back then, it wasn't clear if Python 3 adoption was going to pick up: Hynek wrote an article about Python 3 adoption a few months before PyCon that contained the ominous subheading "GLOOM". Python 3 only reached a majority mindshare of Python developers in May 2017!

Why? That's a topic long enough to fill a series of blog posts, but briefly: the number of breaking changes introduced in the first few releases of Python 3, coupled with the lack of compelling features to incentivize migration, led to slow adoption. Personally, I don't think Python 3 had that balance right to really take off until the 3.3 release. Version 3.3 was released in the fall of 2012. Python 3.4 was only released in early 2014, just before I mentored at my first set of workshops!

This is a long-winded way to say, "when I gave this talk, it wasn't clear that telling workshop organizers to teach Python 3 would be good advice, because the ecosystem wasn't there yet."

The brave new world

But this is no longer the case! Python 3 adoption is overtaking Python 2 use, even in the enterprise space. The Python 2 clock keeps on ticking. Latest releases of Python 3 have compelling features to lure users, including strong, native concurrency support, formatted strings, better cross-system path support, and type hints.
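A quick illustration of a few of those lures in one snippet (the names are made up; f-strings need 3.6+, type hints 3.5+, pathlib 3.4+):

```python
from pathlib import Path


def label(name: str, count: int) -> str:
    # Type hints (3.5+) and formatted string literals (3.6+)
    return f"{name} x{count}"


# pathlib (3.4+): the / operator replaces os.path.join
config = Path("/etc") / "debian_version"

print(label("buster", 10))  # → buster x10
```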

This is to say, if I had to pick just one change to make to this talk if I gave it today, I would tell folks

USE PYTHON 3! ✨

Other updates
  • The documentation for packaging Python is a lot better now. There have been many good talks presented on the subject.
  • Distributing Python is still hard. There isn't a widely adopted practice for cross-platform management of compiled dependencies yet, although wheels are picking up steam. I'm currently working on the manylinux2010 update to address this problem on Linux systems.

Not to let one YouTube commenter rain on my parade, I am thrilled to say that some people in the community have written some awfully nice things about my talk. Thanks to all for doing so—pulling this together really brightened my day!

Blog Posts

Roxanne Johnson writes, "Elana Hashman’s PyCon talk on Teaching Python: The Hard Parts had me nodding so hard I thought I might actually be headbanging." 😄

Georgia Reh writes, "I am just in love with this talk. Any one who has seen me speak about teaching git knows I try really hard to not overload students with information, and Elana has a very clear idea of what a beginner needs to know when learning python versus what they can learn later." 💖


When I presented this talk, I was too shy to attach my twitter handle to my slides, so all these folks tweeted at me by name. Wow!

Elana Hashman's talk on teaching programming has a lot of practical take-aways! @pycon #pycon2016

— Susan Tan (@ArcTanSusan) June 1, 2016

Learned a bunch from this video: Elana Hashman - Teaching Python: The Hard Parts - PyCon 2016 ::

— Mita Williams (@copystar) June 2, 2016

Very good advice in the presentation by Elana Hashman - Teaching Python: The Hard Parts - #PyCon2016

— José Carlos García (@quobit) June 11, 2016

Elana cool talk! Explaining scope: maybe use analogy of person having same name as a celebrity @pycon #pycon2016

— robin chauhan (@robinc) July 17, 2016

If you're about to teach a hands-on #Python workshop/tutorial/class, go watch Elana Hashman's #pycon2016 talk 🎓🐍💖

— ✨ Trey Hunner 🐍 (@treyhunner) November 11, 2016

Other

My talk was included in the "Awesome Python in Education" list. How cool 😎

Declaring a small victory

Writing this post has convinced me that "Teaching Python: The Hard Parts" meets some arbitrary criteria for "sufficiently forward-thinking." Much of the content still strikes me as fresh: as an occasional mentor for various technical workshops, I still keep running into trouble with platform diversity, the command line, and packaging; the "general advice" section is evergreen for Python and non-Python workshops alike. So with all that said, here's hoping that looking back on this talk will keep it alive. Give it a watch if you haven't seen it before!

If you like what you see and you're interested in checking out my speaking portfolio or would like to invite me to speak at your conference or event, do check out my talks page.

Shirish Agarwal: students, suicides, pressures and solutions.

14 June, 2018 - 02:07

A couple of days back, I heard of a student whose body was found hanging from the ceiling in a college nearby. I felt a bit shocked, as I had visited that college just some time back. It is also possible that I may have run into him and even had a conversation with him. No names were shared, and even if they were, it's doubtful I would remember him: during events you meet so many people that it's difficult to parse and remember names. I do feel sort of stretched at events, but that's life I guess.

As no suicide note was found, the police are investigating the death from all angles. While it's too early to conclude whether the student decided to take his own life or someone else decided to end it for some reason or other, I saw that nobody whom I talked to felt perturbed even a tiny bit, probably because it has become the new normal. The major reasons, apart from those shared in a blog post, are that the cost of education is too high for today's students.

There are also perceived career biases: people believe that Computer Science is better than being a lawyer, even though IT layoffs have become the new normal. In this specific case, it was reported that the student who killed himself apparently wanted to be a lawyer, while the family wanted him to do CS (Computer Science).

Also, the whole reskilling and STEM culture may be harder, as government syllabuses are at least 10-15 years out of date. The same goes for the teachers, who would have to change a lot; sadly, it is too common for teachers to be paid a pittance, even college professors.

I know of quite a few colleges in the city, in different domains, where suicides have taken place. The authorities have tried setting up wellness rooms where students who feel depressed can share their feelings, but probably due to feelings of shame or weakness, the ones most at risk do not allow their true feelings to surface. The eastern philosophy of 'saving face' is killing our young ones. There is one non-profit I know of, Connecting NGO, at 18002094353 (toll-free) and 9922001122 (mobile), that students or anyone in depression can call. The listeners don't give any advice, as they are not mental health experts, but simply give a patient hearing. Sometimes sharing or describing whatever you are facing may give either enough hope or a mini-solution that you can walk towards.

I hope people would use the resources listed above.

Sean Whitton: Debian Policy call for participation -- June 2018

13 June, 2018 - 21:15

I’d like to push a substantive release of Policy but I’m waiting for DDs to review and second patches in the following bugs. I’d be grateful for your involvement!

If a bug already has two seconds, or three seconds if the proposer of the patch is not a DD, please consider reviewing one of the others, instead, unless you have a particular interest in the topic of the bug.

If you’re not a DD, you are welcome to review, but it might be a more meaningful contribution to spend your time writing patches bugs that lack them, instead.

#786470 [copyright-format] Add an optional “License-Grant” field

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

#880920 Document Rules-Requires-Root field

#891216 Require d-devel consultation for epoch bump

#897217 Vcs-Hg should support -b too

Enrico Zini: Progress bar for file descriptors

13 June, 2018 - 18:43

I ran gzip on an 80GB file; it's processing, but who knows how much it has done yet, or when it will end? I wish gzip had a progressbar. Or MySQL. Or…

Ok. Now every program that reads a file sequentially can have a progressbar:


Print progress indicators for programs that read files sequentially.

fdprogress monitors file descriptor offsets and prints progressbars comparing them to file sizes.

Pattern can be any glob expression.

usage: fdprogress [-h] [--verbose] [--debug] [--pid PID] [pattern]

show progress from file descriptor offsets

positional arguments:
  pattern            file name to monitor

optional arguments:
  -h, --help         show this help message and exit
  --verbose, -v      verbose output
  --debug            debug output
  --pid PID, -p PID  PID of process to monitor
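On Linux the underlying mechanism is simple enough to sketch: a process's open file descriptors expose their current offset in /proc/&lt;pid&gt;/fdinfo/&lt;fd&gt;, and the descriptor's target file (and hence its size) via /proc/&lt;pid&gt;/fd/&lt;fd&gt;. The helper below is my own illustrative reconstruction of that idea, not fdprogress's actual code.

```python
import os
import re


def fd_progress(pid: int, fd: int) -> float:
    """Fraction of the file that descriptor fd of process pid has consumed."""
    # /proc/<pid>/fdinfo/<fd> contains a "pos:" line with the current offset.
    with open(f"/proc/{pid}/fdinfo/{fd}") as f:
        pos = int(re.search(r"^pos:\s+(\d+)", f.read(), re.MULTILINE).group(1))
    # /proc/<pid>/fd/<fd> is a symlink to the file; stat() follows it.
    size = os.stat(f"/proc/{pid}/fd/{fd}").st_size
    return pos / size if size else 1.0
```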

Norbert Preining: Microsoft fixed the Open R Debian package

13 June, 2018 - 17:09

I just got notice that Microsoft has updated the Debian packaging of Open R to properly use dpkg-divert. I checked the Debian packaging scripts and they now properly divert R and Rscript, and revert back to the Debian provided (r-base) version after removal of the packages.

Version 3.5.0 has been re-released; if you downloaded it from MRAN, you will need to download it again and be careful to use the new one, since the file name is unchanged.

Thanks Microsoft for the quick fix, it is good news that those playing with Open R will not be left with a hosed system.

PS: I guess this post will not get anywhere near the incredible attention the first one got

Jonathan McDowell: Hooking up Home Assistant to Alexa + Google Assistant

13 June, 2018 - 03:21

I have an Echo Dot. Actually I have two; one in my study and one in the dining room. Mostly we yell at Alexa to play us music; occasionally I ask her to set a timer, tell me what time it is or tell me the news. Having setup Home Assistant it seemed reasonable to try and enable control of the light in the dining room via Alexa.

Perversely I started with Google Assistant, even though I only have access to it via my phone. Why? Because the setup process was a lot easier. There are a bunch of hoops to jump through that are documented on the Google Assistant component page, but essentially you create a new home automation component in the Actions on Google interface, connect it with the Google OAuth stuff for account linking, and open up your Home Assistant instance to the big bad internet so Google can connect.

This final step is where I differed from the provided setup. My instance is accessible internally at home, but I haven't wanted to expose it externally yet (and I suspect I never will, instead relying on the ability to VPN back in to access it or similar). The default instructions have you open up API access publicly and configure Google with your API password, which allows access to everything. I'd rather not.

So, firstly I configured up my external host with an Apache instance and a Let’s Encrypt cert (luckily I have a static IP, so this was actually the base host that the Home Assistant container runs on). Rather than using this to proxy the entire Home Assistant setup I created a unique /external/google/randomstring proxy just for the Google Assistant API endpoint. It looks a bit like this:

<VirtualHost *:443>

  ProxyPreserveHost On
  ProxyRequests off

  RewriteEngine on

  # External access for Google Assistant
  ProxyPassReverse /external/google/randomstring http://hass-host:8123/api/google_assistant
  RewriteRule ^/external/google/randomstring$ http://hass-host:8123/api/google_assistant?api_password=myapipassword [P]
  RewriteRule ^/external/google/randomstring/auth$ http://hass-host:8123/api/google_assistant/auth?%{QUERY_STRING}&&api_password=myapipassword [P]

  SSLEngine on
  SSLCertificateFile /etc/ssl/
  SSLCertificateKeyFile /etc/ssl/private/
  SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt
</VirtualHost>

This locks down the external access to just being the Google Assistant end point, and means that Google have a specific shared secret rather than the full API password. I needed to configure up Home Assistant as well, so configuration.yaml gained:

google_assistant:
  project_id: homeautomation-8fdab
  client_id: oFqHKdawWAOkeiy13rtr5BBstIzN1B7DLhCPok1a6Jtp7rOI2KQwRLZUxSg00rIEib2NG8rWZpH1cW6N
  access_token: l2FrtQyyiJGo8uxPio0hE5KE9ZElAw7JGcWRiWUZYwBhLUpH3VH8cJBk4Ct3OzLwN1Fnw39SR9YArfKq
  api_key: nyAxuFoLcqNIFNXexwe7nfjTu2jmeBbAP8mWvNea
  exposed_domains:
    - light

Setting up Alexa access is more complicated. Amazon Smart Home skills must call an AWS Lambda - the code that services the request is essentially a small service run within Lambda. Home Assistant supports all the appropriate requests, so the Lambda code is a very simple proxy these days. I used Haaska, which has a complete setup guide. You must do all 3 steps - the OAuth provider, the AWS Lambda and the Alexa Skill. Again, I wanted to avoid exposing the full API or the API password, so I forked Haaska to remove the use of a password and instead use a custom URL. I then added the following additional lines to the Apache config above:

# External access for Amazon Alexa
ProxyPassReverse /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home
RewriteRule /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home?api_password=myapipassword [P]

In the config.json I left the password field blank and set url to my custom Apache endpoint. The Alexa side of configuration.yaml required less configuration than the Google equivalent:

alexa:
  smart_home:
    filter:
      include_entities:
        - light.dining_room_lights
        - light.living_room_lights
        - light.snug

(I’ve added a few more lights, but more on the exact hardware details of those at another point.)

To enable in Alexa I went to the app on my phone, selected the “Smart Home” menu option, enabled my Home Assistant skill and was able to search for the available devices. I can then yell “Alexa, turn on the snug” and magically the light turns on.

Aside from being more useful (due to the use of the Dot rather than pulling out a phone) the Alexa interface is a bit smoother - the command detection is more reliable (possibly due to the more limited range of options it has to work out?) and adding new devices is a simple rescan. Adding new devices with Google Assistant seems to require unlinking and relinking the whole setup.

The only problem with this setup so far is that it’s only really useful for the room with the Alexa in it. Shouting from the living room in the hope the Dot will hear is a bit hit and miss, and I haven’t yet figured out a good alternative method for controlling the lights there that doesn’t mean using a phone or a tablet device.

John Goerzen: Syncing with a memory: a unique use of tar –listed-incremental

12 June, 2018 - 18:27

I have a Nextcloud instance that various things automatically upload photos to. These automatic folders sync to a directory on my desktop. I wanted to pull things out of that directory without deleting them, and only once. (My wife might move them out of the directory on her computer, and I might arrange them into targets on my end.)

In other words, I wanted to copy a file from a source to a destination, but remember what had been copied before so it only ever copies once.

rsync doesn’t quite do this. But it turns out that tar’s listed-incremental feature can do exactly that. Ordinarily, it would delete files that were deleted on the source. But if we make the tar file with the incremental option, but extract it without, it doesn’t try to delete anything at extract time.

Here’s my synconce script:


#!/bin/bash

set -e

if [ -z "$3" ]; then
    echo "Syntax: $0 snapshotfile sourcedir destdir"
    exit 5
fi

SNAPFILE="$(realpath "$1")"
SRCDIR="$(realpath "$2")"
DESTDIR="$(realpath "$3")"

cd "$SRCDIR"
if [ -e "$SNAPFILE" ]; then
    cp "$SNAPFILE" "${SNAPFILE}.new"
fi
tar "--listed-incremental=${SNAPFILE}.new" -cpf - . | \
    tar -xf - -C "$DESTDIR"
mv "${SNAPFILE}.new" "${SNAPFILE}"

Just have the snapshotfile be outside both the sourcedir and destdir and you’re good to go!
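The same copy-once behaviour can also be sketched without tar, by remembering already-copied names in a small state file. This is only an illustration of the idea, not the author's script, and the state-file format here is an assumption.

```python
import pathlib
import shutil


def sync_once(state_file: pathlib.Path, srcdir: pathlib.Path, destdir: pathlib.Path) -> list:
    """Copy each file from srcdir to destdir at most once, ever."""
    seen = set(state_file.read_text().splitlines()) if state_file.exists() else set()
    copied = []
    for path in sorted(srcdir.iterdir()):
        if path.is_file() and path.name not in seen:
            shutil.copy2(path, destdir / path.name)
            seen.add(path.name)
            copied.append(path.name)
    # Remember everything we have ever copied, like tar's snapshot file.
    state_file.write_text("\n".join(sorted(seen)) + "\n")
    return copied
```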


Creative Commons License — The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.