Planet Debian

Planet Debian - https://planet.debian.org/

Benjamin Mako Hill: UW Stationery in LaTeX

5 April, 2018 - 01:53

The University of Washington’s brand page recently started publishing letterhead templates that departments and faculty can use for official communication. Unfortunately, they only provide them in Microsoft Word DOCX format.

Because my research group works in TeX for everything, Sayamindu Dasgupta and I worked together to create a LaTeX version of the “Matrix Department Signature Template” (the DOCX file is available here). We figured other folks at UW might be interested in it as well.

The best way to get the template for your own use is to clone it from git (git clone git://code.communitydata.cc/uw_tex_letterhead.git). If you notice issues, or if you want to create branches with either of the other two types of official UW stationery, patches are always welcome (instructions on how to make and send patches are here)!

Because the template relies on two OpenType fonts, it requires XeTeX. A detailed list of the dependencies is provided in the README file. We’ve only run it on GNU/Linux (Debian and Arch) but it should work well on any operating system that can run XeTeX as well as web-based TeX systems like ShareLaTeX.
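For reference, here is a minimal build sketch. The name of the main .tex file is a guess on my part; check the repository's README for the real file name and the required font packages:

git clone git://code.communitydata.cc/uw_tex_letterhead.git
cd uw_tex_letterhead
# hypothetical file name; requires XeTeX and the UW OpenType fonts
xelatex uw_letterhead.tex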

And although we created the template, keep in mind that we don’t manage UW’s brand identity in any way. If you have any questions or concerns about whether and when you should use the letterhead, you should contact brand and creative services using the contact information on the stationery page.

John Goerzen: Emacs #5: Documents and Presentations with org-mode

4 April, 2018 - 23:14
The Emacs series

This is the fifth in a series on Emacs and org-mode.

This blog post was generated from an org-mode source and is available as: a blog page, slides (PDF format), and a PDF document.

1 About org-mode exporting

1.1 Background

org-mode isn't just an agenda-making program. It can also export to lots of formats: LaTeX, PDF, Beamer, iCalendar (agendas), HTML, Markdown, ODT, plain text, man pages, and more complicated formats such as a set of web pages.

This isn't just some afterthought either; it's a core part of the system and integrates very well.

One file can be source code, automatically-generated output, task list, documentation, and presentation, all at once.

Some use org-mode as their preferred markup format, even for things like LaTeX documents. The org-mode manual has an extensive section on exporting.

1.2 Getting started

From any org-mode document, just hit C-c C-e. A menu will come up, letting you choose various export formats and options. These are generally single-key options, so exports are easy to set up and execute. For instance, to export a document to a PDF, use C-c C-e l p; for HTML export, C-c C-e h h.

There are lots of settings available for all of these export options; see the manual. It is, in fact, quite possible to use LaTeX-format equations in both LaTeX and HTML modes, to insert arbitrary preambles and settings for different modes, etc.
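Exports can also be run non-interactively from a shell, which is handy for Makefiles. A minimal sketch, assuming an Emacs recent enough to autoload org-mode and a document needing no special init (the file name is hypothetical):

# batch export to HTML and to PDF (the latter needs a working LaTeX toolchain)
emacs --batch notes.org -f org-html-export-to-html
emacs --batch notes.org -f org-latex-export-to-pdf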

1.3 Add-on packages

ELPA contains many additional exporters for org-mode as well. Check there for details.

2 Beamer slides with org-mode

2.1 About Beamer

Beamer is a LaTeX environment for making presentations. Its features include:

  • Automated generation of structural elements in the presentation (see, for example, the Marburg theme). This provides a visual reference for the audience of where they are in the presentation.
  • Strong help for structuring the presentation
  • Themes
  • Full LaTeX available
2.2 Benefits of Beamer in org-mode

org-mode has a lot of benefits for working with Beamer. Among them:

  • org-mode's very easy and strong support for visualizing and changing the structure makes it very quick to reorganize your material.
  • Combined with org-babel, live source code (with syntax highlighting) and results can be embedded.
  • The syntax is often easier to work with.

I have completely replaced my usage of LibreOffice/Powerpoint/GoogleDocs with org-mode and beamer. It is, in fact, rather frustrating when I have to use one of those tools, as they are nowhere near as strong as org-mode for visualizing a presentation structure.

2.3 Headline Levels

org-mode's Beamer export will convert sections of your document (defined by headings) into slides. The question, of course, is: which sections? This is governed by the H export setting (org-export-headline-levels).

There are many ways to go here, suiting different people. I like to have my presentations set up like this:

#+OPTIONS: H:2
#+BEAMER_HEADER: \AtBeginSection{\frame{\sectionpage}}

This gives a standalone section slide for each major topic, to highlight major transitions, and then uses the level 2 headings (two asterisks) as the slides. Many Beamer themes expect a third level of indirection, so for those you would set H:3.

2.4 Themes and settings

You can configure many Beamer and LaTeX settings in your document by inserting lines at the top of your org file. This document, for instance, defines:

#+TITLE:  Documents and presentations with org-mode
#+AUTHOR: John Goerzen
#+BEAMER_HEADER: \institute{The Changelog}
#+PROPERTY: comments yes
#+PROPERTY: header-args :exports both :eval never-export
#+OPTIONS: H:2
#+BEAMER_THEME: CambridgeUS
#+BEAMER_COLOR_THEME: default
2.5 Advanced settings

I like to change some colors, bullet formatting, and the like. I round out my document with:

# We can't just +BEAMER_INNER_THEME: default because that picks the theme default.
# Override per https://tex.stackexchange.com/questions/11168/change-bullet-style-formatting-in-beamer
#+BEAMER_INNER_THEME: default
#+LaTeX_CLASS_OPTIONS: [aspectratio=169]
#+BEAMER_HEADER: \definecolor{links}{HTML}{0000A0}
#+BEAMER_HEADER: \hypersetup{colorlinks=,linkcolor=,urlcolor=links}
#+BEAMER_HEADER: \setbeamertemplate{itemize items}[default]
#+BEAMER_HEADER: \setbeamertemplate{enumerate items}[default]
#+BEAMER_HEADER: \setbeamertemplate{items}[default]
#+BEAMER_HEADER: \setbeamercolor*{local structure}{fg=darkred}
#+BEAMER_HEADER: \setbeamercolor{section in toc}{fg=darkred}
#+BEAMER_HEADER: \setlength{\parskip}{\smallskipamount}

Here, aspectratio=169 sets a 16:9 aspect ratio, and the remaining are standard LaTeX/Beamer configuration bits.

2.6 Shrink (to fit)

Sometimes you've got some really large code examples and you might prefer to just shrink the slide to fit.

Just type C-c C-x p and set the BEAMER_opt property to shrink=15 (or a larger value of shrink). The previous slide uses this.

2.7 Result

Here's the end result:

3 Interactive Slides

3.1 Interactive Emacs Slideshows

With the org-tree-slide package, you can display your slideshow from right within Emacs. Just run M-x org-tree-slide-mode. Then, use C-> and C-< to move between slides.

You might find C-c C-x C-v (which is org-toggle-inline-images) helpful to cause the system to display embedded images.

3.2 HTML Slideshows

There are a lot of ways to export org-mode presentations to HTML, with various levels of JavaScript integration. See the non-beamer presentations section of the org-mode wiki for details.

4 Miscellaneous

4.1 Additional resources to accompany this post

4.2 Up next in my Emacs series…

mu4e for email!

Alexander Reichle-Schmehl: Automatic OTRS ticket creation for Debian package updates

4 April, 2018 - 22:00
I recently stumbled over the problem that I have to create specific tickets in our OTRS system for package upgrades on our Debian servers. The background is that we use OTRS for our ITIL change management processes.

To be more precise: the actual problem isn't to create the tickets - there are plenty of tools to do that - but to create them in a way I find useful. apticron not only sends a mail for pending package upgrades (and downloads them if you want to), it also calls apt-listchanges, which will show you the changelog entries of the packages you are about to install. So you see not only that you have to install upgrades, but also why.

However, I didn't find a way to change the mail, or to add a specific header to it. That would have been a big plus - as you can remote control OTRS via e-mail headers quite a lot. And as I am lazy, that is something I definitely wanted to have.

The same goes for unattended-upgrades. Nice tool, but it doesn't allow changing the mail content or headers. (At least I didn't find a way to do so.)

I was pleasantly surprised that cron-apt not only allows adding headers, it even lists some examples for OTRS headers in its documentation! However, by default it only lists (and downloads) the packages to be upgraded. No changelogs. There is an open wishlist bug to get this feature added, but considering the age of the bug, I wouldn't hold my breath till it is implemented ;)

There is a solution for this problem, though, although it is a bit ugly. And as I'm apparently not the only one interested in it, I'll write it down here (partly because I'm interested to find out whether my blog is still working after quite some years of inactivity). The basic idea is to call apt-listchanges on all deb files in /var/cache/apt/archives/. As there might be some cruft lying around, you'll have to run apt-get clean before that. As we have a proxy and enough bandwidth, that is acceptable in our case.

First you'll have to install cron-apt and apt-listchanges. Add a file /etc/cron-apt/action.d/1-clean containing: clean. This will cause cron-apt to call apt-get clean on each invocation, thus cleaning all files in /var/cache/apt/archives. Next create a file /etc/cron-apt/action.d/4-listchanges containing the line /var/cache/apt/archives/*.deb and a file /etc/cron-apt/config.d/4-listchanges containing the lines:

APTCOMMAND=/usr/bin/apt-listchanges
OPTIONS=""
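Put together, this means cron-apt will effectively run the following on each invocation (a sketch of the resulting command, assuming the action file's content is passed as arguments as described above):

# what the 4-listchanges action boils down to
/usr/bin/apt-listchanges /var/cache/apt/archives/*.deb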

Finally we have to configure cron-apt to actually mail our stuff. Thus create /etc/cron-apt/config and add the following:

# where to send mails
MAILTO="otrs@ourcompany.example"
# mail, when the apt calls create output (see documentation for other options)
MAILON="output"

XHEADER1="X-OTRS-Priority: 3 normal - prio 3"
XHEADER2="X-OTRS-Queue: The::Queue::You:Want::it::in"
XHEADER3="X-OTRS-SenderType: system"
XHEADER4="X-OTRS-Loop: false"
XHEADER5="X-OTRS-DynamicField-...: ..."
..
..
..

Bingo! Automated tickets for our change management process with all information required and automated as much as possible to avoid clicking through the web interface.

Lucy Wayland: The Big Decision

4 April, 2018 - 20:53

I care for you all
I love you all
In so many different ways
I made an oath
And I live by the first law
Thou shall harm none
I swore to the triple Goddesses
That I would not take my life
By my own hand
Today I took part of my life away
It was necessary
It had to be done
But part of me died today
Please say goodbye
To the Lucy you knew and loved
She will never be the same

Raphaël Hertzog: My Free Software Activities in March 2018

4 April, 2018 - 17:22

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

I reviewed and merged 14 merge requests from multiple contributors:

On top of this, I updated the Salsa/AliothMigration wiki page with information about how to best leverage tracker.debian.org when you migrate to salsa.

I also filed a few issues for bugs or things that I’d like to see improved:

I also gave my feedback about multiple mockups prepared by Chirath R in preparation of his Google Summer of Code project proposal.

Security Tools Packaging Team

Following the departure of alioth, the new list that we requested on lists.debian.org has been created: https://lists.debian.org/debian-security-tools/

I updated (in the git repositories) all the Vcs-* and all the Maintainer fields of the packages maintained by the team.

I prepared and uploaded afflib 3.7.16-3 to fix RC bug #892599. I sponsored rhash 1.3.6 for Aleksey Kravchenko, ccrypt 1.10-5 for Alexander Kulak and ledger-wallets-udev 0.1 for Stephne Neveu.

Debian Live

This project also saw an unexpected resurgence of activity and I had to review and merge many merge requests:

It’s nice to see two derivatives being so active in upstreaming their changes.

Misc stuff

Hamster time tracker. I was regularly hit by a bug leading to a gnome-shell crash (and hence a graphical session crash, due to the design of Wayland) and this time I decided that enough was enough, so I started to dig into the code and did my best to fix the issues I encountered. During the month, I tested multiple versions and submitted three pull requests. Right now, the version in git is working fine for me. Still, it really smells of bad design that mistakes in shell extensions can have such dramatic consequences.

Packaging. I forwarded #892063 to upstream in a new ticket. I updated zim to version 0.68 (final release replacing release candidate that I had already packaged). I filed #893083 suggesting that the hello source package should be a model for other packages and as such it should have a git repository hosted on salsa.debian.org.

Sponsorship. I sponsored pylint-django 0.9.4-1 for Joseph Herlant. I also sponsored urwid 2.0.1-1 (new upstream version), xlwt 1.3.0-1 (new version with python 3 support), elastalert 0.1.29-1 (new upstream release and RC bug fix) which have been updated for Freexian customers.

Thanks

See you next month for a new summary of my activities.


Julien Danjou: Is Python a Good Choice for Enterprise Projects?

4 April, 2018 - 16:35

A few weeks ago, one of my followers, Morteza, reached out and asked me the following:

I develop projects mostly with Python, but I am scared that Python is not a good choice for enterprise projects. In many cases, I've encountered a situation where Python performance was not sufficient, like thread spawning and so on, and as you know, the GIL supports one thread at a time.
Some friends told me to try to use Java, C++ or even Go for enterprise projects instead of Python. I see many job boards that require Python just for testing, QA or some small projects. I feel that Python is a small gun for showing my experiences and that I'd have to choose an alternative language.
As you are advanced and professional in many topics especially in Python, I'd need your advice. Is Python good enough for enterprise systems? Or should I choose an alternative language which fills the gaps that exist in Python?

If you've been following me for a long time, you know I've been doing Python for more than ten years now and have even written two books about it. So while I'm obviously biased, and before writing a reply, I would also like to take a step back and reassure you, dear reader, that I've used plenty of other programming languages over the last 20 years: Perl, C, PHP, Lua, Lisp, Java, etc. I've built tiny to big projects with some of them, and I consider that Lisp is the best programming language. 😅 Therefore, I like to think that I'm not overly partial.

To reply to Morteza, I would say that you first need to acknowledge that a language itself is not slow or fast. English is not faster than French; however, some French people speak faster than English people.

So then, yes, CPython, the chief implementation of the Python programming language, has some limitations: the GIL (Global Interpreter Lock), as Morteza says, is the most significant parallelism limiter. The rest of the language is being optimized regularly, and you can follow the work done in each Python version to see where this is going. CPython gets faster with each minor version.

On the other hand, don't think that Go or Java are miracles: they both have their limitations. For example, you can read this compelling presentation from Ben Bangert at Mozilla entitled "From Python to Go and back again". Ben explains some of the limitations that he encountered while switching to Go.

I'm sure you can find problems and limitations with the Java Virtual Machine too.

In Scaling Python, I wrote a few chapters covering the GIL and how you can circumvent its limitation. If you write widely scalable applications, the GIL is not such a big deal, as you need, anyway, to spread the load across multiple servers, not only on several processors.

There are tons of companies running Python applications at large scale, e.g. Instagram, Google and YouTube, Dropbox or PayPal.

Therefore, no, Python is not only for QA applications, any more than Java is only good for browser applets or Go is only for devops or whatever.

They all are different languages that approach problems from different angles. Depending on your mindset and on the solution that you want to implement, some might appear better equipped than others. Their virtual machines or compilers are marvelous, but also have their limitations and shortcomings that you need to be aware of so you can avoid falling into a big trap.

Of course, another approach is to remove all those issues by going down a layer and use a lower level language, e.g. C or C++. That'll remove those limitations for sure: no Python GIL, no Go resources leaking, no JVM startup slowness, etc. However, it'll add a ton of extra work and problems that YOU will have to solve – puzzles that are already resolved by higher-level languages. That's a matter of trade-offs: do you want to write a blazingly fast program in 10 years or do you want to write a decently fast program in 1 year? 😏

In the end, picking a language is not only a matter of performance but also a concern of support, community, and ecosystem. Picking battle-tested languages like Python and Java is the assurance of reliability and trustworthiness, while selecting a younger language like Rust might be an exciting ride. Doing some "reality check" is always worth considering before choosing a language. If you wanted to write an application that uses, e.g., AMQP and HTTP/2, are you sure that there are libraries providing those features and that are broadly used and supported? Or are you ready to commit time to maintain them yourself?

Again, Python is pretty solid here. Considering the extensive practice it has seen, there are tons of widely used libraries for everything you could ever need. The community is large and the ecosystem is flourishing.

In the end, I do think that yes, Python is a terrific choice for any enterprise projects, and considering the number of existing projects it counts, I'm not the only one thinking that way.

Feel free to share your experience – or even projects – in the comments section below!

Enrico Zini: Command line arguments are code

4 April, 2018 - 14:48
A story of yak shaving

I wanted to see the full pathname of the currently edited file in neovim, I found out it's ctrl+G.

But wouldn't it be nice to always see it?

sudo apt install vim-airline vim-airline-themes fonts-powerline
vim-addon-manager install vim-airline vim-airline-themes
echo "let g:airline_powerline_fonts = 1" >> ~/.vimrc

I recall one could also see the current git branch?

sudo apt install vim-fugitive
vim-addon-manager install vim-fugitive

Ooh, I remember I used to have pycodestyle highlighting in my code?

sudo apt install vim-syntastic
vim-addon-manager install vim-syntastic
A story of horror

Great! Uhm, wait, I recall syntastic had some pretty stupid checkers one needed to be careful of. Ah yes, it looks like I can whitelist them:

let g:syntastic_mode_map = { 'mode': 'active',
                           \ 'active_filetypes': ['python', 'cpp'],
                           \ 'passive_filetypes': [] }
let g:syntastic_python_checkers = ['flake8', 'python']
let g:syntastic_python_python_exec = '/usr/bin/python3'
let g:syntastic_cpp_checkers = ['gcc', 'clang-tidy']

Note, when I say "stupid", I mean something focusing way more on what can be done, rather than on what should be done.

I appreciate that a syntastic checker, that sends all of your current file to a remote website over http every time you open it or save it, can be written. I believe it should not be written, or at least not distributed.

Ok, now, how do I pass extra include dirs to clang-tidy? Ah, there is a config file system.

How does it work exactly?

Stupidly.

Lesson learned

Command line options should be treated as code, not as data.

Any configuration system that allows injecting arbitrary command line options into commands that get executed should be treated as a configuration system that can inject arbitrary code into your own application. It can be a powerful tool, but it needs to be carefully secured against all sorts of exploits.
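To illustrate with a contrived sketch: many tools have options that load and execute code, so letting a checked-out project append flags to a checker's command line is effectively letting it run code on your machine. The -fplugin flag below is real GCC syntax; the plugin is, of course, hypothetical:

# a compiler "option" that executes arbitrary code from the build tree
gcc -fplugin=./evil_plugin.so -c harmless.c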

Jonathan Dowland: Third Annual UK System Research Challenges Workshop

4 April, 2018 - 13:39

A couple of weeks ago I gave a talk at the Third Annual UK System Research Challenges Workshop. This was my first conference attendance (let alone talk) in a while (3 years!), and my first ever academic conference, although the talk I gave was work-related: Containerizing Middleware Applications (abstract, PDF slides, paper/notes). I also gave a brief impromptu lightning talk about software preservation, with Chocolate Doom as a case study.

The venue was a Country Club/Spa Hotel quite near to where I live. For the first time I managed to fit a swim in every morning I was there, something I'd like to repeat next time I'm away at a hotel with a pool.

It was great to watch some academic talks. The workshop is designed to be small and welcoming to new researchers. It was very useful to see what quality and level of detail fellow PhD students (further along with their research) are producing, and there were some very interesting talks (here's the programme).

Thanks to the sponsors (including my own employer) who made it possible.

I had considered giving a talk on my PhD topic, but it was not quite at the stage where I had something ready to share. I'm also aware I haven't written a word on it here either, and that's something I urgently want to address. My proposal is due quite soon and so I should have much to write about afterwards.

Reproducible builds folks: Reproducible Builds: Weekly report #153

4 April, 2018 - 00:30

Here's what happened in the Reproducible Builds effort between Sunday March 25 and Saturday March 31 2018:

Patches filed

In addition, build failure bugs were reported by:

  • "r. ductor" (1)
  • Adrian Bunk (42)
Reviews of unreproducible packages

59 package reviews have been added, 44 have been updated and 35 have been removed this week, adding to our knowledge about identified issues.

One issue type was added:

Two issue types were removed:

  • randomness_in_gcj_output & gcj_captures_build_path. [...]
Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Neil McGovern: ED Update – Week 14

3 April, 2018 - 20:08

Last weekend, I was at LibrePlanet 2018, the FSF’s annual conference where I gave a talk about Free Software desktops and their continued importance. The videos are currently being uploaded, and there were some really interesting talks on a wide range of subjects. One particular highlight for me was that Karen Sandler (Software Freedom Conservancy ED, and former GNOME ED) won the Award for the Advancement of Free Software, which was very highly deserved. Additionally, the Award for Projects of Social Benefit went to Public Lab, who had a very interesting talk on attracting newcomers and building communities. They advocated the use of first-timers-only as a way to help introduce people to the project. It was good to catch up with GNOMErs and various people in the wider community.

I arrived a day early into Boston, as Deb Nicholson had kindly helped organise a SpinachCon. The idea behind these is to do some user testing and see actual people using GNOME. We were also accompanied by Dataverse and Debian. It was interesting to watch people try to accomplish some tasks (like “Set a wallpaper” and “start a screen recording”) and see what happens. This is probably worth a blog post all on its own, so I'll write that up separately. For those who want a sneak peek: it wasn't just usability improvements that could come out of it; we discovered a couple of bugs as well.

Apart from that, both myself and Sriram Ramkrishna have been added as mods of reddit.com/r/gnome to help out there, and I gave a wide-ranging interview for Destination Linux!

Elana Hashman: A tale of three Debian build tools

3 April, 2018 - 11:00

Many people have asked me about my Debian workflow. Which is funny, because it's hard to believe you're doing it right when you use three different build tools, but I have figured out a process that works for me. I use git-buildpackage (gbp), sbuild, and pbuilder, each for different purposes. Let me describe why and how I use each, and the possible downsides of each tool.

Note: This blog post is aimed at people already familiar with Debian packaging, particularly using Vcs-Git. If you'd like to learn more about the basics of Debian packaging, I recommend you check out my Clojure Packaging Tutorial and my talk about packaging Leiningen.

git-buildpackage: the workhorse

I use git-buildpackage (aka gbp) to do the majority of my package builds. Because it runs by default without any special sandboxing from your base system, gbp builds things fast—much faster than other build tools—and as such, it's a great tool if you are iterating quickly to work bugs out of a build.

I usually invoke gbp like this:

gbp buildpackage -uc -us

The flags ensure that 1) we don't sign the generated .changes file (-uc) and 2) we don't sign the .dsc file (-us), since typically I only want to sign build artifacts right before upload.

Other handy flags include

  • --git-ignore-new, which proceeds with a build even when you have an unclean working tree
  • --git-ignore-branch, to build when you've checked out a branch that's not master
  • --git-pristine-tar, to automatically check out and build with the corresponding pristine tar checked into the repository
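These can be combined freely; for instance, a sketch of a build from a work-in-progress branch:

# build despite uncommitted changes, on a non-master branch, using pristine-tar
gbp buildpackage -uc -us --git-ignore-new --git-ignore-branch --git-pristine-tar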

Typically, if I use gbp to build binary packages, I will only do so in the confines of an updated, minimal sid LXC container, in order to reduce the risk of contaminating my build with stuff that's not available in a clean build environment.

gbp tastes great with sadt, a script in devscripts that runs your package's autopkgtests directly on your base system without root. You will need to install your test package and any dependencies before you can run sadt, but it requires much less setup and infrastructure than autopkgtest does.

***

So while gbp is awesome for the majority of my development workflow, I don't rely on the output of gbp when I want to upload a package, however clean I think my build LXC is. For uploads, I still use gbp, but exclusively to perform source-only (-S) builds:

gbp buildpackage -S -uc -us

Sometimes I may also need to pass the -d flag when building source packages to ignore installed build dependencies. By default, gbp won't proceed with a build when build dependencies are not installed, but when said dependency is only needed for building the binary (and not the source) package, we can safely override this check.
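In that case the invocation above simply gains a -d:

# source-only build, ignoring unsatisfied (binary-only) build dependencies
gbp buildpackage -S -d -uc -us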

Typically, when I'm uploading an update to a package, I'll just build the source package with gbp as above, sign the .changes and .dsc files, and complete a source-only upload so the buildds can handle all the build details.

sbuild: the gating build

"But wait!" you undoubtedly object. "How do you know your source package isn't full of garbage?!" That's a great question, because... I don't. So it behooves me to test it. And since the buildds use sbuild, why not use that for my own QA?

Setting up sbuild is a giant pain. You need a machine you have root access on, which wasn't the case for git-buildpackage.

To use sbuild without sudo, you'll need to add your user to the sbuild group:

sudo sbuild-adduser myusername
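Note that group membership only takes effect in a new login session; to pick it up immediately you can start a shell under the new group (or just log out and back in):

# activate the sbuild group membership in the current session
newgrp sbuild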

And create a schroot for builds:

sudo sbuild-createchroot unstable /srv/chroot/unstable-amd64-sbuild http://deb.debian.org/debian

If you ever need to update your sbuild schroot (you will), you can run:

sudo sbuild-update -udcar unstable-amd64-sbuild

I remember these flags by pronouncing them as one word, "ud-car", which sounds absurd and is hence memorable. I can never remember the name of my schroot, but I can look that up by running ls /srv/chroot.

Okay, time to build. Once we have our source package from gbp, we should have a .dsc file to pass to sbuild. We can build like so:

sbuild mypackage_1.0-1.dsc -d unstable -A --run-autopkgtest

We specify the target distribution with -d unstable, and -A ensures that our "arch: all" package will actually build (not necessary if your package targets specific architectures). To run the autopkgtests after a successful build, we pass --run-autopkgtest.

I don't like typing very much so I stick all these parameters in an .sbuildrc in my home directory. You can reference or copy the one in /usr/share/doc/sbuild/examples/example.sbuildrc because it's a Perl script that ends in 1;.

I add

$distribution = 'unstable';
$build_arch_all = 1;

# Optionally enable running autopkgtests
# $run_autopkgtest = 1;

so I don't have to type these in all the time. I don't enable autopkgtests by default because they prompt for a sudo password midway through the build (but perhaps that won't bother you, so feel free to uncomment those lines). Once we have our ~/.sbuildrc created, we can just run

sbuild mypackage_1.0-1.dsc

Much better!

After my package successfully builds and tests, I take a quick look at the changes, build environment, and package contents. sbuild automatically prints these out, which is very convenient. If everything looks okay, I will sign the source package with debsign and upload it with dput (from dput-ng).
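That last step looks something like this (file names are hypothetical, and the dput target depends on your configuration; ftp-master is the usual default for Debian uploads):

debsign mypackage_1.0-1_source.changes
dput ftp-master mypackage_1.0-1_source.changes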

pbuilder: I need to upload binaries!

One irritation I have with sbuild is that I can never figure out the right flags to get the right build artifacts for a binary upload. Its defaults are too minimal for sending to NEW without some additional fancy incantations (it doesn't include the package tarball, only the buildinfo and produced .deb), and I have a hard enough time remembering the flags that I listed above. Remember, this is what the manpage for sbuild looks like:

MANPAGE OF THE DAY: sbuild https://t.co/e7nwB5PUUZ

...this was a little traumatizing tbh pic.twitter.com/zSSEbe4ROH

— e. hashman (@ehashdn) December 24, 2017

So when I have to upload a NEW package, I usually use pbuilder.

There are two ways to invoke pbuilder. The first is the easiest, but "not recommended" for uploads by the manpage: simply navigate to the root of the repository you want to build and run pdebuild.

pdebuild

Wow, that's the simplest thing we've run yet! Why don't we run it all the time?! Well, because of the way pbuilder sets up its filesystem, it can be slower than sbuild, so I've moved it out of my development workflow. It also requires my sudo password, and as I mentioned earlier, I don't particularly like having to enter that mid-build.

The second way, which I usually apply when I need to upload a build, is invoking pbuilder directly. Like with sbuild, we need to provide it a .dsc, so we should build a source package first. However, pbuilder is smarter than sbuild and doesn't need me to give it architectures and target distros and whatnot, so there is significantly less headache if I haven't tweaked a personal configuration. With pbuilder, I can run

sudo pbuilder build mypackage_1.0-1.dsc

and I get a package!

One of the annoying things about pbuilder is that it doesn't output files in my current build directory. Instead, by default, it places build artifacts inside /var/cache/pbuilder/result. So I always have to remember to copy things out of there before I upload.
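So the post-build shuffle is something like this (package name hypothetical):

# pbuilder leaves everything in its result directory by default
cp /var/cache/pbuilder/result/mypackage_1.0-1* .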

Also, pbuilder doesn't print out some build information that I should check over before uploading, so I have to do that manually with dpkg:

# Print out package information, including dependencies
dpkg -I libmypackage_1.0-1_all.deb

# List all contents of the package
dpkg -c libmypackage_1.0-1_all.deb

Now I can go ahead and sign my built .changes file and perform an upload!

In summary

Here's what I'd say the pros and cons of each of these three build tools are.

git-buildpackage

Pros:

  • Super speedy (gotta go fast)
  • Uses version control
  • No rootz

Cons:

  • Way too much typing
  • Constantly have to run d/rules clean or pass --git-ignore-new
  • Can't produce production build artifacts
sbuild

Pros:

  • Supposedly "fast"
  • Prints helpful information once the build is complete
  • Runs autopkgtests with no marginal effort
  • Only needs root like every third time I run it
  • Conforms with the buildds

Cons:

  • Setup is way too complicated
  • Manpage is terrifying
  • Doesn't actually give me the build artifacts I want
pbuilder

Pros:

  • The least typing!!!
  • Gets me all the build artifacts I want
  • Not the reason I get rejected by FTP Master

Cons:

  • Slow
  • Sticks build artifacts into /var/run/wheretheheck
  • People will yell at you for not using sbuild
cowbuilder, qemubuilder, whalebuilder, mylittlepersonalbuilder, etc.

Pros:

  • I don't use these.

Cons:

  • I don't use these.

Hope you enjoyed the tour. Happy building!

References

François Marier: Looking back on starting Libravatar

3 April, 2018 - 08:00

As noted on the official Libravatar blog, I will be shutting the service down on 2018-09-01.

It has been an incredible journey but Libravatar has been more-or-less in maintenance mode for 5 years, so it's somewhat outdated in its technological stack and I no longer have much interest in doing the work that's required every two years when migrating to a new version of Debian/Django. The free software community prides itself on transparency and so while it is a difficult decision to make, it's time to be upfront with the users who depend on the project and admit that the project is not sustainable in its current form.

Many things worked well

The most motivating aspect of running Libravatar has been the steady organic growth within the FOSS community: in terms of traffic (in March 2018, we served a total of 5 GB of images and 12 GB of 302 redirects to Gravatar), in integration with other sites and projects (Fedora, Debian, Mozilla, Linux kernel, Gitlab, Liberapay and many others), but also in terms of users:

In addition, I wanted to validate that it is possible to run a FOSS service without having to pay for anything out-of-pocket, so that it would be financially sustainable. Hosting and domain registrations have been entirely funded by the community, thanks to the generosity of sponsors and donors. Most of the donations came through Gittip/Gratipay and Liberapay. While Gratipay has now shut down, I encourage you to support Liberapay.

Finally, I made an effort to host Libravatar on FOSS infrastructure. That meant shying away from popular proprietary services in order to make a point that these convenient and well-known services aren't actually needed to run a successful project.

A few things didn't pan out

On the other hand, there were also a few disappointments.

A lot of the libraries and plugins never implemented DNS federation. That was the key part of the protocol that made Libravatar a decentralized service, but unfortunately the rest of the protocol was much easier to implement and therefore many clients stopped there.

In addition, it turns out that while the DNS system is essentially a federated caching system for IP addresses, many DNS resolvers aren't doing a good job caching records and that created unnecessary latency for clients that chose to support DNS federation.
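For the curious, federation support boils down to an SRV lookup before falling back to the central server. A sketch, with the record names as I recall them from the Libravatar API (verify against the spec):

# does example.com run its own avatar server?
dig +short SRV _avatars._tcp.example.com       # HTTP
dig +short SRV _avatars-sec._tcp.example.com   # HTTPS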

The main disappointment was that very few people stepped up to run mirrors. I designed the service so that it could scale easily in the same way that Linux distributions have coped with increasing user bases: "ftp" mirrors. By making the actual serving of images only require Apache and mod_rewrite, I had hoped that anybody running Apache would be able to add an extra vhost to their setup and start serving our static files. A few people did sign up for this over the years, but it mostly didn't work. Right now, there are no third-party mirrors online.

The other aspect that was a little disappointing was the lack of code contributions. There were a handful from friends in the first couple of months, but it's otherwise been a one-man project. I suppose that when a service works well for what people use it for, there are fewer opportunities for contributions (or less desire for them). The fact that the dev environment setup was not the easiest could definitely be a contributing factor, but I've only ever had a single person ask about it, so it's not clear that this was the limiting factor. Also, while our source code repository was hosted on GitHub and open for pull requests, we never even received a single drive-by contribution, hinting at the fact that GitHub is not the magic bullet for community contributions that many people think it is.

Finally, it turns out that it is harder to delegate sysadmin work (you need root, for one thing), which consumes the majority of the time in a mature project. The general administration and maintenance of Libravatar has never moved beyond its core team of one. I don't have a lot of ideas here, but I do want to join others who have flagged this as an area for "future work" in terms of project sustainability.

Personal goals

While I was originally inspired by Evan Prodromou's vision of a suite of FOSS services to replace the proprietary stack that everybody relies on, starting a free software project is an inherently personal endeavour: the shape of the project will be influenced by the personal goals of the founder.

When I started the project in 2011, I had a few goals:

This project personally taught me a lot of different technologies and allowed me to try out various web development techniques I wanted to explore at the time. That was intentional: I chose my technologies so that even if the project was a complete failure, I would still have gotten something out of it.

A few things I've learned

I learned many things along the way, but here are a few that might be useful to other people starting a new free software project:

  • Speak about your new project at every user group you can. It's important to validate that you can get other people excited about your project. User groups are a great (and cheap) way to kickstart your word of mouth marketing.

  • When speaking about your project, ask simple things of the attendees (e.g. create an account today, join the IRC channel). Often people want to support you but they can't commit to big tasks. Make sure to take advantage of all of the support you can get, especially early on.

  • Having your friends join (or lurk on!) an IRC channel means it's vibrant, instead of empty, and there are people around to field simple questions or tell people to wait until you're around. Nobody wants to be alone in a channel with a stranger.

Thank you

I do want to sincerely thank all of the people who contributed to the project over the years:

  • Jonathan Harker and Brett Wilkins for productive hack sessions in the Catalyst office.
  • Lars Wirzenius, Andy Chilton and Jesse Noller for graciously hosting the service.
  • Christian Weiske, Melissa Draper, Thomas Goirand and Kai Hendry for running mirrors on their servers.
  • Chris Forbes, fr33domlover, Kang-min Liu and strk for writing and maintaining client libraries.
  • The Wellington Perl Mongers for their invaluable feedback on an early prototype.
  • The #equifoss group for their ongoing support and numerous ideas.
  • Nigel Babu and Melissa Draper for producing the first (and only) project stickers, as well as Chris Cormack for spreading them so effectively.
  • Adolfo Jayme, Alfredo Hernández, Anthony Harrington, Asier Iturralde Sarasola, Besnik, Beto1917, Daniel Neis, Eduardo Battaglia, Fernando P Silveira, Gabriele Castagneti, Heimen Stoffels, Iñaki Arenaza, Jakob Kramer, Jorge Luis Gomez, Kristina Hoeppner, Laura Arjona Reina, Léo POUGHON, Marc Coll Carrillo, Mehmet Keçeci, Milan Horák, Mitsuhiro Yoshida, Oleg Koptev, Rodrigo Díaz, Simone G, Stanislas Michalak, Volkan Gezer, VPablo, Xuacu Saturio, Yuri Chornoivan, yurchor and zapman for making Libravatar speak so many languages.

I'm sure I have forgotten people who have helped over the years. If your name belongs here and is missing, please email me or leave a comment.

Thorsten Alteholz: My Debian Activities in March 2018

3 April, 2018 - 00:03

FTP master

This month I accepted 252 packages and rejected 23 uploads. The overall number of packages that got accepted this month was 308.

I also took care of #890944.

Debian LTS

This was my forty-fifth month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 23.25h. During that time I did LTS uploads of:

    [DLA 1313-1] isc-dhcp security update for two CVEs
    [DLA 1312-1] libvorbisidec security update for one CVE
    [DLA 1333-1] dovecot security update for three CVEs
    [DLA 1334-1] mosquitto security update for two CVEs
    [DSA 4152-1] mupdf security update for two Jessie CVEs and two Stretch CVEs

I also prepared a test package for wireshark, fixing 12 CVEs. I am still waiting for feedback :-).

The issues for mupdf did not affect Wheezy, so there has been no DLA. Instead the security team accepted my debdiff for Jessie and Stretch and published a DSA. Thanks to Luciano for doing this.
As it turned out, the patch I found for icu last month had been the correct one. But as the issue did not affect Wheezy, there was no DLA for it either.

Last but not least I did one week of frontdesk duties.

Other stuff

During March I did uploads of …

  • libctl to fix a FTBFS during binary-indep-only build

I also moved all oauth2 related packages as well as cd5 to salsa.

Last but not least I took care of some old bugs in apcupsd that no longer seem to be relevant.

Manuel A. Fernandez Montecelo: Debian GNU/Linux port for RISC-V 64-bit (riscv64) in Debian infrastructure (debian-ports)

2 April, 2018 - 08:20

tl;dr: We have a new port for RISC-V, flavour riscv64 (64-bit little-endian), in Debian Ports.

And it's doing well.

Debian-Ports 2-week Graph, 2018-03-31

(PS: Despite it still being April 1st in some timezones... no, this is not an April-Fools-day type of post :-) ).

Ancient History

Almost a year ago I wrote a post announcing the availability of a Debian GNU/Linux port for RISC-V 64-bit (riscv64).

At the time it was an incomplete Debian system, with several of the most important pieces missing (toolchain: gcc, glibc, binutils), and with everything kept outside the Debian infrastructure.

The main reason for that was that support for the toolchain had not been submitted upstream, and the ABI was not very stable. It was changing often, in important and disruptive ways, without even so much as a public announcement before the breakage was noticed.

Last pieces of the toolchain upstreamed, at long last

Fortunately, over the last year, support has been merged in GNU binutils first (2.28), then GCC (7), then Linux 4.15 (pre-requisite for glibc), and finally in glibc 2.27, released in the first days of February.

Support is still not complete, for example riscv32 targets have not been merged in Linux mainline (but this is not important for us, unless someone else wants to start such a port), basic drivers are still missing from mainline, the support in GDB and Qemu is becoming available only now, there are many bugs that need shaking out, etc.

Bootstrapping again, 2018 edition

Despite the very recent support, the ecosystem and software support are mature enough that we could, with only the help of qemu-system-riscv64, progress through the following steps:

1. Cross-compile a base set of packages

Dates: prior work for a few years (almost since the public announcement of RISC-V, in 2014); current bootstrapping since end of Feb until ~5th of March.

Apart from previous bootstrapping attempts, during the last months prior to properly starting this one I worked on clearing the path, for example by NMUing (Debian lingo for uploading a fix without “ack” from the maintainer, after no reply in weeks/months, or with an “ack” agreeing that I go ahead) packages with patches proposed by Helmut and by porters of other architectures (e.g. many from Daniel Schepler) that had seen no reply for a long time, or sometimes proposing new patches or new ways to solve the problems, etc.

After that, at the end of February and along with Karsten and Aurélien, we went through a round of cross-compiling, mostly with the help of Helmut Grohne's rebootstrap, but also solving additional problems: using gcc-8 rather than gcc-7 because the latter failed to work properly for a few days, pulling parts of the toolchain from experimental instead of unstable (with different versions around causing conflicts), an experimental package of glibc 2.27 still not present in unstable at the time (which rendered packages such as GNU make unable to compile without patching), etc.

We also cross-compiled a significantly larger set of packages than those considered by rebootstrap at the moment; some of them with crude hacks, like removing build-dependencies or avoiding building man pages when that required running riscv64 native code. Sometimes it is just faster to cross-compile and have many packages ready for when they are needed, even if cross-built packages sometimes don't work 100% correctly or at all.

Many of those packages are not essential to run a Debian system, but are needed later in the initial steps of “native” building. For example, the compiler/toolchain itself, which the rebootstrap script doesn't attempt to provide; or GNU make or cmake, which are often not needed to run a Linux system, but are required to build a great deal of software.

As a result of this phase, we have submitted bugs (or in some cases, uploads after a period of silence), with many still pending to send or apply, to try to sort these problems out in a clean way for other ports in the future (and better rebootstrapping and cross-compilation support in Debian overall), with the use of different tools like build profiles to avoid unwanted dependencies or to break dependency cycles, or moving the generation of documentation to an arch:all package, etc (so it's done once for all architectures, and riscv64 or any other arch-builders don't have to bother about it).

2. Use this set to launch “native” systems with qemu-system-riscv64

Dates: ~5th to ~13th of March.

Having built a “native” system from cross-built packages, we could natively compile the packages that were difficult to cross-build, and also run tests for packages providing them (so one can have some certainty that the package behaves sanely), trying to keep the native system as clean as possible.

For example, one of the first packages we needed and built natively was Perl (which is pretty basic for any Debian system, and which we couldn't get to cross-compile easily).

We recompiled natively all (or almost all) packages that we had cross-built, and by running tests when possible, we uncovered some latent bugs in the toolchain and qemu, and in some packages.

The result was a set of packages (that we named “native-1”) enough to run systems equivalent to those installed in machines acting as automatic builders for other architectures, capable of building and running native "sbuild", "schroot", etc. We had to cut some unimportant dependency or feature here and there, but those changes did not affect the packages of the next set, or only marginally.

3. Build “native-2” to import to debian-ports

Dates: ~13th to ~23rd of March.

With systems installed from “native-1”, we built again a restricted set of packages for a very basic Debian system to act as a “seed”, with as few differences as possible from the current state of Debian unstable, with the aim of importing them into the Debian-ports infrastructure as cleanly as possible.

Intentionally, we built as few packages as possible, so most of the packages would go through the standard mechanisms.

A few packages have been built with the help of build-profiles support, for example to avoid dependencies on Java's default-jdk (openjdk-9-jdk at the moment), because we do not have that one yet.

Another typical case is disabling tests after attempting to pass them, because a few cases failed among hundreds that passed, often due to timeouts (system emulation being quite slow, this happens very often).

But most packages have been built completely unmodified; and those that were not will be rebuilt later, when their dependencies are satisfied.

4. Import the base set “native-2” as “seed” and set up automatic systems

Dates: ~23rd to ~25th of March.

Set everything up to build the whole archive in as much of a standard and unattended way as possible.

5. Currently, solve problems like breaking dependency cycles

Dates: ~25th of March, still on it.

In the current phase, the process as a whole is not “unattended”, because we have to build packages with a reduced set of features or build-dependencies, to break cycles (e.g. cmake depending on qt for its GUI tool, qt ecosystem depending on cmake).

To get close to 100% of the archive built, there are many important dependency cycles yet to break, some packages need patches specific to the architecture, etc.

Nevertheless, thousands of packages have been built by now, most of them completely unattended and in the same way as the other architectures available in Debian.

Current Status

In the last few days and weeks, the status was sometimes changing by the hour. That, along with -ENOTIME, is the main reason why this post has been delayed until now and the news has not been publicised widely.

We just focused on getting things done :-)

But to give a general idea of where we are now:

  • Over 4100 packages (> 30% of the source-packages of the whole archive that are not arch-independent) have been built by now.
    • Since arch-independent packages can be installed in this architecture (e.g. many modules of languages like Python, documentation, etc.), about 65 to 70% of the Debian archive is available to use.
    • As explained before, most of the packages have been built completely unattended and in the same way as the other architectures available in Debian.
    • Progress of automatic builds as it happens: https://buildd.debian.org/status/architecture.php?a=riscv64&suite=sid
  • None of this work has been tested on real hardware yet.
    • Outside of FPGA implementations, the first publicly/easily available hardware will only surface in the next few weeks, in small quantities and at a high price. We hope, and in particular I have a great deal of confidence, that the software will basically work, but... yeah.
  • We still fairly often find tests that fail to pass or lock up in qemu, etc.
    • It's difficult to know if these are bugs in the software, compiler/toolchain, emulation, or a combination of them.
    • Still, since we got over 30% of the arch-dependent packages of a huge collection like Debian built, passing tests for most of them when available, things do not look too bad.
  • We are still breaking dependency cycles that prevent many packages from being built.
    • For example, we have almost no software depending on complex GUIs (think of KDE, GNOME, browsers), there is still no support for languages such as Java, etc.
Who

Listing people working on different areas and at different times:

  • Karsten Merker and I for a long time, participating in RISC-V mailing lists and some events
  • Karsten Merker and I cross-building and using rebootstrap; Karsten is still focusing on that area
  • Aurélien Jarno helping more actively since the preparation of pre-releases of glibc 2.27, and other parts of the tool-chain during cross-compilation
  • Aurélien Jarno and I working closely together in the last phase of cross-compilation and fine-tuning of the toolchain; then to build the native sets of packages to bring riscv64 to debian-ports, setting-up buildds, etc.
  • Additional support by Kurt Keville of the MIT RV128 team for a couple of years now
  • Bytemark, one of the big sponsors of Debian (and other FOSS projects), sponsored a machine where I did most of the work during the previous years and up to the packages built to import into Debian ports

Apart from these people/resources, we have on our side:

  • all the people working on cross-compilation and multi-arch over the years (Standing on the Shoulders of Giants, as it were)
  • Helmut Grohne's focus and tenacity on rebootstrap over the years, fixing problems all along (already mentioned previously)
  • many maintainers adding support eagerly to their packages (sometimes not trivial at all)
  • people starting to submit patches and requests to have packages with RISC-V support ready to install, e.g. qemu-system-riscv64
  • and various people offering help and giving support along the way, too many to list here and for my poor brain to remember.
How to Use

Download packages or create your own image from: https://wiki.debian.org/RISC-V#Package_repositories
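As a rough sketch of what that amounts to on an installed system (the exact suites, components and keyring package should be verified against the wiki page above):

# add the debian-ports repository; riscv64 lives there, not on the main mirrors
cat >> /etc/apt/sources.list.d/debian-ports.list <<'EOF'
deb [arch=riscv64] http://ftp.ports.debian.org/debian-ports unstable main
deb [arch=riscv64] http://ftp.ports.debian.org/debian-ports unreleased main
EOF
apt install debian-ports-archive-keyring   # archive signing keys for debian-ports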

We will try to provide pre-built images or easier ways to test in the near future.

How to Help

Please refer to:

Future Work

Usual stuff for other ports, like:

  1. Be able to build most of the archive
  2. Create images to install or ready to use in qemu, etc.
  3. Make it a first-class, fully-supported architecture in future Debian Stable releases

Russ Allbery: remctl 3.14

2 April, 2018 - 05:52

remctl is a client/server protocol supporting remote execution of specific configured commands using GSS-API or ssh for authentication and encryption.

This is a minimal release that fixes a security bug introduced in 3.12, discovered by Santosh Ananthakrishnan. A remctl client with the ability to run a server command with the sudo configuration option may be able to corrupt the configuration of remctld to run arbitrary commands, although I believe this would be moderately difficult to do. Only remctld (not remctl-shell) is vulnerable, and only if there are commands using the sudo configuration option.

There is a more formal security advisory as well.

If you are running remctl 3.12 or 3.13, I recommend upgrading, although there should be no security consequences if you are not using the sudo configuration option. Fixed packages have been uploaded for Debian stable (stretch) and unstable.

You can get the latest version from the remctl distribution page.

Benjamin Mako Hill: Workshop on Casual Inference

2 April, 2018 - 04:55

My research collective, the Community Data Science Collective, just announced that we'll be hosting an event on casual inference in online community research!

We believe this will be the first event on casual inference in any field. We look forward to relaxing our assumptions, and so much more!

Bits from Debian: DebConf20 in a cruise

2 April, 2018 - 01:15

The last editions of DebConf, the annual Debian conference, have taken place in locations as unalike as Heidelberg (Germany), Cape Town (South Africa) and Montreal (Canada). Next summer DebConf18 will happen in Hsinchu (Taiwan) and the location for DebConf19 is already decided: Curitiba (Brazil). During all these years an idea has been floating in the air (aka the Debian IRC channels) about organising a DebConf on a cruise. Today, the Debian Project is happy to announce that a group of Debian contributors have teamed up to propose an actual bid for DebConf20 on a cruise.

The Cruise Team is confident about their ability to provide a detailed and strong bid by the end of the year. However, a brief plan is already prepared: the conference would happen in July and August 2020, during a trip around the world in a "rolling conference" scheme. This means that Debian contributors could choose when to arrive and leave by embarking/disembarking at one of the harbours where the boat will stop. A DebCamp focused on sprinting the development of Debian blends and an "Open Day" with install parties under the sea and other interesting activities for the general public are also planned.

There will be a sprint to discuss the bid details during DebConf18 in Hsinchu. The team has also initiated conversations with several cruise ship companies and satellite network providers in order to explore the possible venues and connectivity options for the conference. Interested parties can contact press@debian.org to join the Cruise Team in the preparation of the future conference.

David Kalnischkies: APT for DPL Candidates

1 April, 2018 - 19:31

Today is a special day for apt: 20 years ago, after much discussion in the team as well as in the Debian project at large, "APT" was born.

What happened in all these years? A lot! But if there is one common theme, it is that many useful APT features, tricks and changes are not as well known to the general public, or even to most Debian Developers, as they should be.

A few postings are unlikely to change that, but I will try anyhow, and this post is the start of a mini-series of "APT for …" articles showing off things. For the first installment I want to show nothing less than the longest-running vote-rigging scheme in the known free (software) world. But let's start from the beginning:

Humans pride themselves on having evolved beyond simple animals that follow their instincts, by having a mind of their own to form decisions. Building on this concept, humans hold votes to agree upon things, including who will be the Debian Project Leader for the next year.

DPL candidates are encouraged to provide a platform, and a discussion between the candidates and the voters ensures that everyone can form a well-informed opinion on the candidates in question and, based on this information, choose a candidate of their own free will.

Or at least, that is the theory Debian Developers want to believe in.

The following table tallies each leader vote since 1999 with the information whether the candidate (68 overall, 37 unique) had contributed to APT (31/13), dpkg (29/17) or both (18/9), for the cases I could identify (if I missed anything, feel free to get in touch). The winner of each election is marked with an asterisk (*), candidates marked with a dagger (†) won a later election, and bracketed numbers refer to the footnotes below:

Year  Candidate                 APT?      dpkg?
1999  Joseph Carter             no        no
      Ben Collins †             no [1]    yes
      Wichert Akkerman *        no        yes
      Richard Braakman          no        yes
2000  Ben Collins †             no [1]    yes
      Wichert Akkerman *        no        yes
      Joel Klecker              no        yes
      Matthew Vernon            no        yes
2001  Branden Robinson †        yes [2]   yes
      Anand Kumria              no        yes
      Ben Collins *             yes [1]   yes
      Bdale Garbee †            yes [3]   no
2002  Branden Robinson †        yes [2]   yes
      Raphaël Hertzog           yes [4]   yes
      Bdale Garbee *            yes [3]   no
2003  Moshe Zadka               no        no
      Bdale Garbee              yes [3]   no
      Branden Robinson †        yes [2]   yes
      Martin Michlmayr *        yes [5]   no
2004  Martin Michlmayr *        yes [5]   no
      Gergely Nagy              no        no
      Branden Robinson †        yes [2]   yes
2005  Matthew Garrett           no        no
      Andreas Schuldei          no        yes
      Angus Lees                no        no
      Anthony Towns †           yes [6]   yes
      Jonathan Walther          no        no
      Branden Robinson *        yes [2]   yes
2006  Jeroen van Wolffelaar     yes [7]   yes
      Ari Pollak                no        no
      Steve McIntyre †          yes [8]   no
      Anthony Towns *           yes [6]   yes
      Andreas Schuldei          no        yes
      Jonathan (Ted) Walther    no        no
      Bill Allombert            no        yes
2007  Wouter Verhelst           no        no
      Aigars Mahinovs           no        no
      Gustavo Franco            no        no
      Sam Hocevar *             yes [10]  yes
      Steve McIntyre †          yes [8]   no
      Raphaël Hertzog           yes [4]   yes
      Anthony Towns             yes [6]   yes
      Simon Richter             yes [9]   yes
2008  Marc Brockschmidt         no        no
      Raphaël Hertzog           yes [4]   yes
      Steve McIntyre *          yes [8]   no
2009  Stefano Zacchiroli †      yes [11]  no
      Steve McIntyre *          yes [8]   no
2010  Stefano Zacchiroli *      yes [11]  no
      Wouter Verhelst           no        no
      Charles Plessy            yes [12]  yes
      Margarita Manterola       no        yes
2011  Stefano Zacchiroli *      yes [11]  no
2012  Wouter Verhelst           no        no
      Gergely Nagy              no        no
      Stefano Zacchiroli *      yes [11]  no
2013  Gergely Nagy              no        no
      Moray Allan               no        no
      Lucas Nussbaum *          no        no
2014  Lucas Nussbaum *          no        no
      Neil McGovern †           no        no
2015  Mehdi Dogguy †            no        no
      Gergely Nagy              no        no
      Neil McGovern *           no        no
2016  Mehdi Dogguy *            no        no
2017  Mehdi Dogguy              no        no
      Chris Lamb *              yes [13]  yes
2018  Chris Lamb [14]           yes [13]  yes

We can see directly that until recently it was nearly mandatory to have contributed to apt or dpkg to be accepted as a DPL candidate. The recent streak of "no" entries before Chris entered the table gets doubly weird if we factor in that I joined Debian and the APT project around 2009… We might get to the bottom of this coincidence in the future, but for now let's get back to the topic at hand:

DDs have no free will

It is hard for a human to accept, but the table above shows that DDs aren't as free in their choice as they think they are; they follow a simple rule:

If at least one of the candidates contributed to APT, an APT contributor wins the election.

You can read this directly from the table above (20 votes total, including 6 votes without an apt candidate). Interestingly, the same rule does not hold for dpkg at all. In fact, there are years in which the defining difference for the winning candidate is that he [15] hasn't contributed to dpkg but only to apt (e.g. 2002).
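In the spirit of the post, here is a back-of the-envelope tally of that rule. This sketch is mine, not the author's: column two records whether any APT contributor ran that year, column three whether an APT contributor won. The data is transcribed by hand from the table above, and the 2018 row is an assumption in line with footnote 14.

# Tally the rule: in every contested year (an APT contributor ran),
# did an APT contributor win?
awk '{ total++; if ($2 == "no") vacuous++; else if ($3 == "yes") holds++ }
     END { printf "%d votes, %d without an APT candidate, rule holds in %d/%d\n",
                  total, vacuous, holds, total - vacuous }' <<'EOF'
1999 no no
2000 no no
2001 yes yes
2002 yes yes
2003 yes yes
2004 yes yes
2005 yes yes
2006 yes yes
2007 yes yes
2008 yes yes
2009 yes yes
2010 yes yes
2011 yes yes
2012 yes yes
2013 no no
2014 no no
2015 no no
2016 no no
2017 yes yes
2018 yes yes
EOF
# prints: 20 votes, 6 without an APT candidate, rule holds in 14/14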

Praying via bug reports, and sacrifices in the form of patches, in the Pantheon of the Supercow (the deity@ mailing list) can have profound effects. Take the very first elections as an example: after being an unsuccessful candidate in 1999 and 2000, candidate Ben implemented the rsh transport method for apt and as a result became DPL in 2001.

And as if that wouldn't be enough, being on good terms with Supercow has additional benefits:

Contribution beats Re-Election

While it seems like DPLs are granted a second term if they wish, the recent 2017 election shows that contributing to APT is the stronger force. Two other non-re-elections are on record: in 2003, the bug reporter Bdale lost against the patch provider Martin, so contribution size and recency seem to play a role as well. But that might not be everything there is to in-fights between contributors, as shown in 2007, where Anthony lost against Sam in the biggest vote so far, with 5 out of 8 candidates supported by apt (including the present and the future DPL of this year).

The "Super" rubs of on the DPL

Many DPLs run for two terms, but only a single one has managed a third: after unsuccessful campaigning in 2009, Stefano not only worked on apt in 2010 and the following years but also consulted with a certain high priest of the Supercow, netting a record three-year stint as DPL as a result.

It remains to be seen whether the intercession of a high priest is needed for long DPL terms, but it certainly doesn't hurt – and I of course make myself selflessly available (for a reasonable monetary offering) as said high priest, should a DPL (candidate) be in need of my divine bovine help (again)…

Every contribution matters – for real!

No contribution is too small, everything counts & Supercow sees everything. Even "just" downgrading the severity of a bug counts [10]. Supercow values all contributors: join its ranks today and win the next DPL election in 2019!

Of course, Supercow likes big and groundbreaking patches as much as the next project, but while other projects just talk about how much they like testers, bug reporters, translators and documentation improvers, we in the apt project have 20 years of election-rigging data to prove it! Supercow really cares for its contributors!

So to all past, present and future DPL candidates: Thanks for your contributions to APT (and Debian)!

That… this… what the f…edora?!?

Look at the calendar: it's not only Easter Sunday, it's also the beginning of the voting period for DPL 2018. What better day could there be for some fun about humans, genesis, elections and the 20th birthday of apt?

I promise that future installments in the series will be more practically useful, but until then enjoy the festive days (if applicable) around apt's birthday, have fun, take care and make sure to contribute to apt!

Mooooo!

  1. Contributed the RSH/SSH method in 2000. Won the following election after two unsuccessful rounds. 

  2. Credited in AUTHORS for "Man Page Documentation" 

  3. Early tester as shown in e.g. #45050 

  4. Bug reporter and provider of a draft patch, e.g. #793360

  5. Bug reporter and patch provider, e.g. #417090

  6. Multiple patches over the years since 2005 including the latest reimplementation of rred 

  7. Bug reporter and patch provider: #384182

  8. Bug reporter and tester, e.g. #218995

  9. Bug reporter, tester and patch provider, e.g. #509866

  10. RC bug triager, e.g. #454666 

  11. Multiple bug reports and patches, including pushing & documenting EDSP

  12. Documentation patches, e.g. #619088 

  13. Tester and patch provider, e.g. #848721 

  14. The vote hasn't happened yet, but NOTA is by definition not an apt contributor and hence can't win as outlined in this post. 

  15. To this day, Debian has had no DPL to whom "he" does not apply. With this information, you might be able to change this in future elections!

Norbert Preining: TeX Live 2018 (pretest) hits Debian/experimental

16 March, 2018 - 11:27

TeX Live 2017 has been frozen, and we have entered the preparation phase for the release of TeX Live 2018. Time to update the Debian packages to the current status as well.

The other day I uploaded the following set of packages to Debian/experimental:

  • texlive-bin 2018.20180313.46939-1
  • texlive-base, texlive-lang, texlive-extra 2018.20180313-1
  • biber 2.11-1

This brings Debian/experimental on par with the current status of TeX Live’s tlpretest. After a bit of testing, and once the sources have stabilized a bit more, I will upload everything to unstable for broader testing.

This year hasn’t seen any big changes, see the above linked post for details. Testing and feedback would be greatly appreciated.

Enjoy.

Clint Adams: Don't feed them after midnight

15 March, 2018 - 19:51

“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

“Hold on while I fellate this demon,” announced Adrian.

Posted on 2018-03-15 Tags: bgs
