Planet Debian

Planet Debian - https://planet.debian.org/

Chris Lamb: Tour d'Orwell: The River Orwell

15 October, 2019 - 07:19

Continuing my Orwell-themed peregrination, a certain Eric Blair took his pen name "George Orwell" because of his love for a certain river just south of Ipswich, Suffolk. With sheepdog trials being undertaken in the field underneath, even the concrete Orwell Bridge looked pretty majestic from the garden centre-cum-food hall.

Martin Pitt: Hardening Cockpit with systemd (socket activation)³

15 October, 2019 - 07:00
Background

A major future goal for Cockpit is support for client-side TLS authentication, primarily with smart cards. I created a Proof of Concept and a demo long ago, but before this can be called production-ready, we first need to harden Cockpit’s web server cockpit-ws to be much more tamper-proof than it is today. This work heavily uses systemd’s socket activation. I believe we are now using this in quite a unique and interesting way that helped us to achieve our goal rather elegantly and robustly.
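For readers unfamiliar with the mechanism: under socket activation, systemd itself binds and listens on the port, and only starts the service when a connection arrives; the service then inherits the already-bound socket as a file descriptor instead of opening it itself. Below is a minimal sketch of such a unit pair; the unit and binary names are illustrative, not Cockpit's actual units.

# example-ws.socket -- systemd owns the listening socket
[Unit]
Description=Example web service socket

[Socket]
ListenStream=9090

[Install]
WantedBy=sockets.target

# example-ws.service -- started on the first connection; it inherits
# the bound socket as fd 3 rather than binding the port itself
[Unit]
Description=Example web service

[Service]
ExecStart=/usr/libexec/example-ws
# hardening directives can then be layered on top, for example:
PrivateTmp=yes
NoNewPrivileges=yes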

Arturo Borrero González: What to expect in Debian 11 Bullseye for nftables/iptables

15 October, 2019 - 00:00

Debian 11, codename Bullseye, is already in the works. It is interesting to make decisions early in the development cycle so people have time to accommodate and integrate them, and this post brings you the latest update on the plans for Netfilter software in Debian 11 Bullseye. Mind that Bullseye is expected to be released sometime in 2021, so there is still plenty of time ahead.

The situation as of the Debian 10 Buster release is that iptables uses the -nft backend by default, and one must explicitly select -legacy in the alternatives system in case of any problem. That was intended to help people migrate from iptables to nftables. Now the question is what to do next.
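For reference, switching between backends on Buster is done with standard update-alternatives commands:

# show which backend is currently selected
update-alternatives --display iptables
# fall back to the legacy backend if something misbehaves
update-alternatives --set iptables /usr/sbin/iptables-legacy
# return to the nftables-based backend
update-alternatives --set iptables /usr/sbin/iptables-nft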

Back in July 2019, I started an email thread on the debian-devel@lists.debian.org mailing list looking for consensus on lowering the archive priority of the iptables package in Debian 11 Bullseye. My proposal is to drop iptables from Priority: important and promote nftables instead.

In general, having such a priority level means the package is installed by default in every single Debian installation. Given that we aim to deprecate iptables, and that starting with Debian 10 Buster iptables is not even using the x_tables kernel subsystem but nf_tables, keeping that priority level seems pointless and inconsistent. There was agreement, and I have already made the changes to both packages.

This is another step in deprecating iptables and welcoming nftables. But it does not mean that iptables won’t be available in Debian 11 Bullseye. If you need it, you can install it explicitly with aptitude install iptables from the package repository.

The second part of my proposal was to promote firewalld as the default ‘wrapper’ for firewalling in Debian. I think this is in line with the direction other distros are moving in. It turns out firewalld integrates pretty well with the system: it includes a DBus interface, and many system daemons (like libvirt) already have native integration with firewalld. Also, I believe the days of creating custom-made scripts and hacks to handle the local firewall may be long gone, and firewalld should be very helpful here too.
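As a taste of what that wrapper looks like in practice, here is a minimal firewall-cmd session (standard firewalld commands; the service chosen is just an example):

# check that firewalld is running
firewall-cmd --state
# persistently allow the ssh service in the default zone
firewall-cmd --permanent --add-service=ssh
# reload so the permanent configuration takes effect
firewall-cmd --reload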

Ritesh Raj Sarraf: Bpfcc New Release

14 October, 2019 - 16:24
BPF Compiler Collection 0.11.0

bpfcc version 0.11.0 has been uploaded to Debian Unstable and should be accessible in the repositories by now. This is the first upload to Debian since the 0.8.0 release.

Multiple source repositories

This release brought in a dependency on another set of sources from the libbpf project. In the upstream repo, how to release tools that depend on one another in unison is still a topic of discussion. Right now, libbpf is configured as a git submodule in the bcc repository, so anyone using the upstream git repository should be able to build it.
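For anyone building from the upstream git repository, that just means pulling in the submodule as well, for example:

git clone --recurse-submodules https://github.com/iovisor/bcc.git
# or, inside an existing checkout:
git submodule update --init --recursive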

Multiple source archives for a Debian package

I had read in the past about multiple source tarballs for a single package in Debian, but had never tried it because I wasn’t maintaining anything in Debian that needed it. With bpfcc, this was a good opportunity to try it out. First, I came across this post from Raphaël Hertzog, which gives a good explanation of everything that needs to be done. The article was very clear and concise on the topic.

Git Buildpackage

gbp is my tool of choice for packaging in Debian, so I took a quick look to check how gbp would take care of it. Everything was in place, and it Just Worked:

rrs@priyasi:~/rrs-home/Community/Packaging/bpfcc (master)$ gbp buildpackage --git-component=libbpf
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig.tar.gz
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig-libbpf.tar.gz
gbp:info: Performing the build
dpkg-checkbuilddeps: error: Unmet build dependencies: arping clang-format cmake iperf libclang-dev libedit-dev libelf-dev libzip-dev llvm-dev libluajit-5.1-dev luajit python3-pyroute2
W: Unmet build-dependency in source
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
dh clean --buildsystem=cmake --with python3 --no-parallel
   dh_auto_clean -O--buildsystem=cmake -O--no-parallel
   dh_autoreconf_clean -O--buildsystem=cmake -O--no-parallel
   dh_clean -O--buildsystem=cmake -O--no-parallel
dpkg-source: info: using source format '3.0 (quilt)'
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: warning: ignoring deletion of directory src/cc/libbpf
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.dsc
I: Generating source changes file for original dsc
dpkg-genchanges: info: including full source code in upload
dpkg-source: info: unapplying fix-install-path.patch
ERROR: ld.so: object 'libeatmydata.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
W: cgroups are not available on the host, not using them.
I: pbuilder: network access will be disabled during build
I: Current time: Sun Oct 13 19:53:57 IST 2019
I: pbuilder-time-stamp: 1570976637
I: Building the build Environment
I: extracting base tarball [/var/cache/pbuilder/sid-amd64-base.tgz]
I: copying local configuration
I: mounting /proc filesystem
I: mounting /sys filesystem
I: creating /{dev,run}/shm
I: mounting /dev/pts filesystem
I: redirecting /dev/ptmx to /dev/pts/ptmx
I: Mounting /var/cache/apt/archives/
I: policy-rc.d already exists
W: Could not create compatibility symlink because /tmp/buildd exists and it is not a directory
I: using eatmydata during job
I: Using pkgname logfile
I: Current time: Sun Oct 13 19:54:04 IST 2019
I: pbuilder-time-stamp: 1570976644
I: Setting up ccache
I: Copying source file
I: copying [../bpfcc_0.11.0-1.dsc]
I: copying [../bpfcc_0.11.0.orig-libbpf.tar.gz]
I: copying [../bpfcc_0.11.0.orig.tar.gz]
I: copying [../bpfcc_0.11.0-1.debian.tar.xz]
I: Extracting source
dpkg-source: warning: extracting unsigned source package (bpfcc_0.11.0-1.dsc)
dpkg-source: info: extracting bpfcc in bpfcc-0.11.0
dpkg-source: info: unpacking bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
I: Not using root during the build.

Debian XMPP Team: New Dino in Debian

14 October, 2019 - 05:00

Dino (dino-im in Debian), the modern and beautiful chat client for the desktop, has some nice new features. Users of Debian testing (bullseye) might like to try them:

  • XEP-0391: Jingle Encrypted Transports (explained here)
  • XEP-0402: Bookmarks 2 (explained here)

Note that users of Dino on Debian 10 (buster) should upgrade to version 0.0.git20181129-1+deb10u1 because of a number of security issues that have been found (CVE-2019-16235, CVE-2019-16236, CVE-2019-16237).
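Assuming the buster security archive is enabled in your APT sources, the fixed version can be pulled in with the usual commands:

sudo apt update
sudo apt install dino-im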

There have been other XMPP-related updates in Debian since the release of buster, among them:

You might be interested in the October XMPP newsletter, also available in German.

Utkarsh Gupta: Joining Debian LTS!

14 October, 2019 - 02:41

Hey,

(DPL Style):
TL;DR: I joined Debian LTS as a trainee in July (during DebConf) and finally as a paid contributor from this month onward! :D

Here’s something interesting that happened last weekend!
Back during the good days of DebConf19, I finally got a chance to meet Holger! As amazing and inspiring a person as he is, it was an absolute pleasure meeting him, and I also got a chance to talk about Debian LTS in more detail.

I was introduced to Debian LTS by Abhijith during his talk at MiniDebConf Delhi, and since then I’ve been kinda interested in that project!
But finally it was here that things got a little “official” and after a couple of mail exchanges with Holger and Raphael, I joined in as a trainee!

I had almost no idea what to do next, so the next month I stayed silent, observing the workflow as people kept committing and announcing updates.
And finally in September, I started triaging and fixing the CVEs for Jessie and Stretch (mostly the former).

Thanks to Abhijith, who explained the basics of what a DLA is and how we go about fixing bugs and then announcing them.

With that, I could fix a couple of CVEs and thanks to Holger (again) for reviewing and sponsoring the uploads! :D

I mostly worked (as a trainee) on:

  • CVE-2019-10751, affecting httpie, and
  • CVE-2019-16680, affecting file-roller.

And finally this happened:
(Though a little hiccup happened there, that’s something we can ignore!)

So finally, I’ll be working with the team from this month on!
As Holger says, very much yay! :D

Until next time.
:wq for today.

Iustin Pop: Actually fixing a bug

14 October, 2019 - 01:43

One of the outcomes of my recent (last few years) sports ramp-up is that my opensource work is almost entirely left aside. Having an office job makes it very hard to spend more time sitting at the computer at home too…

So even my travis dashboard has been red for a while now, but I didn’t look into it until today. Since I hadn’t changed anything recently and the travis builds just started failing on their own, I was sure it was just environment changes that needed to be taken into account.

And indeed it was so, for two out of three projects. The third one… there I actually got to fix a bug, introduced at the beginning of the year, which gcc (the same gcc that originally passed it) started to trip on a while back. I even had to read the man page of snprintf! Was fun ☺, too bad I don’t have enough time to do this more often…

My travis dashboard is green again, and the “test suite” (if you can call it that) has been expanded to explicitly catch this specific problem in the future.

Shirish Agarwal: Social media, knowledge and some history of Banking

13 October, 2019 - 05:58

First of all, Happy Dussehra to everybody. Dussehra in India is a symbol of many things, among them forgiveness and new beginnings. While I don’t know about new beginnings, I do feel there is still a lot of baggage which needs to be left behind. I will try to share some insights I uncovered over the last few months and a few realizations I came across.

First of all, thank you to the Debian GNOME team for continuing to work on new versions of packages. While there are still a bunch of bugs which need to be fixed, especially #895990 and #913978 among others, kudos for working on it. Hopefully those bugs and others will be fixed soon, so we can install GNOME without a hiccup. I have not been on IRC because my riot-web has been broken for several days now. Also, most of the IRC and Telegram channels, at least those related to Debian, become echo chambers one way or the other, as you do not get any serious opposition. On Twitter, while it’s highly toxic, you also get the urge to fight the good fight when people fight, whether on principle or for some other reason (usually paid trolls). While I follow my own rules on Twitter apart from their TOS, here are a few rules I feel new people going on social media, in India or perhaps elsewhere, could use:

  1. It is difficult to remain neutral and stick to the facts. If you just stick to the facts, you will be branded as an “urban naxal” or some such name.
  2. Many times I find that if you stay calm and don’t react, the other person turns out to be curious and simply lacking knowledge you thought everyone had, whether due to lack of education, lack of knowledge or pretension; although if it is pretension, they are caught sooner or later.
  3. Be civil at all times. If somebody harasses you or calls you names, report them and block them, although Twitter still needs to fix its reporting mechanism a whole lot more. When even somebody like me (with a bit of understanding of law, technology, language, etc.) had a hard time figuring out Twitter’s reporting flow, I dunno how many people would be able to use it successfully. Maybe they make it so unhelpful so that the traffic flows no matter what. I do realize they still haven’t figured out their business model, but that’s a question for another day. In short, they need to make it far simpler than it is today.
  4. You always have the option to block people, but it has its own consequences.
  5. Be passive-aggressive if the situation demands it.
  6. Most importantly though, if somebody starts making jokes about you or starts abusing you, you can be sure the person on the other side doesn’t have any more arguments, and you have won.
Banking

Before I start, let me share why I am putting up a blog post on the topic. The reason is pretty simple: it seems a huge number of Indians don’t know the history of how banking started and the various turns it took. In fact, nowadays history is being hotly contested and perhaps even re-written, hence for some things I will be sharing sources, but even within them there is the possibility of contestation. One long-contested question is when ancient coinage, and the techniques of smelting and flattening, came to India. Depending on whom you ask, you get different answers. A lot of people are waiting for more insight from the Keezhadi excavation, which may shed some light on the topic as well. There are rumors that the funding is being stopped, but I hope that isn’t true and that we gain some more insight into Indian history. In fact, in South India there seems to be a lot of curiosity about and attraction towards the site. It is possible that the next time I get a chance to see South India, I may try to visit this unique location, if a museum gets built somewhere nearby. Sorry for deviating from the topic, but it seems that ancient coinage started anywhere between the 1st millennium BCE and the 6th century BCE, so coinage in India could be anywhere between 2000 and 3000 years old. While we can’t say anything for sure, it’s possible that there was barter before that. There has also been some history of sharing tokens in different parts of the world as well. The various timelines get all jumbled up, hence I would suggest people use the Wikipedia page on the History of Money as a starting point. While it may not give a complete picture, it would probably broaden the understanding a little bit. One of the reasons why history is so hotly contested could also perhaps lie in the destruction of the Ancient Library of Alexandria. Who knows what more we would have known of our ancients if it had not been destroyed.

Hundi (16th century)

I am jumping to the 16th century as it is closer to today’s modern banking; otherwise the blog post would be too long. The hundi was a financial instrument in use from the 16th century onwards, functioning as either a forebear of the cheque or a traveller’s cheque. There doesn’t seem to be much information on whether it was introduced by the Britishers, or earlier by the Portuguese when they came to India, or whether it was in prevalence before that. There is, though, a fascinating in-depth study of hundi between 1858 and 1978 done by Marina Bernadette for the London School of Economics as her dissertation.

Banias and Sarafs

As I have shared before, history in India is intertwined with mythology. While it is possible that a lot of the history behind this is documented somewhere, I haven’t been able to find it. As I come from a Bania family, I had learnt a lot of stories about both the migratory streak that Banias had, as well as how Banias used to split their children between adjoining states. Before the Britishers ruled over India, popular history tells us that it was the Mughal empire that ruled over us. What it doesn’t tell us, though, is that during both the Mughal empire and British rule, Banias and Sarafs (moneylenders and bullion traders respectively) hedged their bets, more so if they were in royal service or bound to be close to the administration of the state or mini-kingdoms. What they used to do is make sure that one son would serve the king here while the other son might serve the Muslim ruler. The idea behind this was that irrespective of whoever won, the Banias or Sarafs would be able to continue their traditions, and it was very much possible that the other brother would not be killed, or even if he was, any and all wealth would pass to the victorious brother and the family name would live on. If I were to look into it, I’m sure I’d find the same not only among Banias and Sarafs but perhaps in other castes and communities as well. Modern history also tells of the Rothschilds, who did and continue to be an influence on the world today.

As to why I shared how people acted in their self-interest: nowadays on Indian social media, many people choose to believe a very simplistic black-and-white narrative, and they are being fed that by today’s dominant political party in power. What I’m simply trying to say is that history is much more complex than that. While you may choose to believe either of the beliefs, it might open a window in at least some Indians’ minds that there is a possibility that things were done, and people acted, in ways other than what is perceived today. It is also possible this may be contested today, as a lot of people would like to appear on the ‘right side’ of history as it seems today.

Banking in British Raj till nationalization

When the Britishers came, they brought the modern banking system with them. This led to the creation of various banks like the Bank of Bengal, Bank of Bombay and Bank of Madras, which were later subsumed under the Imperial Bank of India, which in turn became the State Bank of India in 1955. While I will not go into details, I am curious, so if life allows I would surely want to visit either the Banca Monte dei Paschi di Siena S.p.A. of Italy or the Berenberg Bank, both of which probably have a lot more history than what is written on their Wikipedia pages. Soon there was a whole clutch of banks which would continue to facilitate the British till independence, and leak money overseas even afterwards, till the banks were nationalized in 1956 on the recommendation of the ‘Gorwala Committee’. Apart from the opaqueness of private banking and the leakages, there was the non-provision of loans to the priority sector, i.e. farming in India; A.D. Gorwala recommended nationalization to weed out both issues in a single solution. One could debate the efficacy of the same, but history has shown us that privatization in the financial sector has many a time been costly to depositors. The financial crisis of 2008, and the aftermath in many of the financial markets, more so in private banks, is a testament to it. Even the documentary Plenary’s Men gives a whole lot of insight into the corruption that private banks engage in today.

Plenary’s Men, on YouTube, is at least to my mind evidence enough that India should be cautious in its dealings with private banks.

Co-operative banks and their Rise

The rise of co-operative banks in India was largely due to the rise of co-operative societies; the Co-operative Societies Act dates back to 1904 itself. While there were quite a few co-operative societies and banks, arguably the real fillip to co-operative banking was given by Amul when it started in 1946, along with the milk credit society that started with it. I dunno how many people have seen ‘Manthan’, which chronicled the story and brought the story of both the co-operative societies and co-operative banks to millions of Indians. It is a classic movie which a lot of today’s youth probably don’t know, and even if they did, it would take time to identify with, although people of my generation and earlier generations do get it. One of the things many people don’t get is that for a lot of people even today, especially marginal farmers and the like in rural areas, co-operative banks are still the only solution. While in recent times the Govt. of the day has tried something called the Jan Dhan Yojana, it hasn’t been as much of a success as they were hoping for. While reams of paper have been written about it, like most policies it didn’t deliver to the last person, which is what such inclusion programs aim for. Issues from design to implementation are many, but perhaps some other time. I am sharing about co-operative banks because a recent scam took place in one of them, probably one of the most respected and widely held co-operative banks. I would rather share Sucheta Dalal’s excellent analysis of the PMC Bank crisis, which is unfolding and will perhaps continue to unfold in days to come.

Conclusion

At the end, I have to admit I took a lot of short-cuts to get here. It is possible there are details people might want me to incorporate; if so, please let me know and I will try to add them. I did try to compress as much as possible while still being reasonably thorough. I also haven’t used any data, as I wanted to keep the explanations as simple as possible, and I tried to keep politics out of it as much as possible, even though the biases which are there, are there.

Dirk Eddelbuettel: GitHub Streak: Round Six

12 October, 2019 - 22:53

Five years ago I referenced the Seinfeld Streak, used in an earlier post about regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking:

[chart: GitHub activity, October 2013 to October 2014]

And four years ago a first follow-up appeared in this post:

[chart: GitHub activity, October 2014 to October 2015]

And three years ago we had a follow-up:

[chart: GitHub activity, October 2015 to October 2016]

And two years ago we had another one:

[chart: GitHub activity, October 2016 to October 2017]

And last year another one:

[chart: GitHub activity, October 2017 to October 2018]

As today is October 12, here is the newest one, from 2018 to 2019:

[chart: GitHub activity, October 2018 to October 2019]

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Louis-Philippe Véronneau: Alpine MusicSafe Classic Hearing Protection Review

12 October, 2019 - 11:00

Yesterday, I went to a punk rock show and had tons of fun. One of the bands playing (Jeunesse Apatride) hadn't played in 5 years and the crowd was wild. The other bands playing were also great. Here's a few links if you enjoy Oi! and Ska:

Sadly, those kinds of concerts are always waaaaayyyyy too loud. I mostly go to small-venue concerts, and for some reason the sound technicians think it's a good idea to make everyone's ears bleed. You really don't need to amplify the drums when the whole concert venue is 50m²...

So I bought hearing protection. It was the first time I wore earplugs at a concert and it was great! I can't really compare the model I got (Alpine MusicSafe Classic earplugs) to other brands since it's the only one I tried out, but:

  • They were very comfortable. I wore them for about 5 hours and didn't feel any discomfort.

  • They came with two sets of plastic tips you insert in the silicone earbuds. I tried the -17 dB ones but I decided to go with the -18 dB inserts as it was still freaking loud.

  • They fitted very well in my ears even though I was in the roughest mosh pit I've ever experienced (and I've seen quite a few). I was sweating profusely from all the heavy moshing and never once did I fear losing them.

  • My ears weren't ringing when I came back home so I guess they work.

  • The earplugs didn't distort sound, they only reduced the volume.

  • They came with a handy aluminium carrying case that's really durable. You can put it on your keychain and carry them around safely.

  • They only cost me ~25 CAD with taxes.

The only thing I disliked was that I found it pretty much impossible to sing while wearing them, as I couldn't really hear myself. With a bit of practice, I was able to sing in tune, but it wasn't great :(

All in all, I'm really happy with my purchase and I don't think I'll ever go to another concert without earplugs.

Molly de Blanc: Conferences

12 October, 2019 - 06:23

I think there are too many conferences.

Are there too many FLOSS conferences?

— Molly dBoo (@mmillions) October 7, 2019

I conducted this very scientific Twitter poll and out of 52 respondents, only 23% agreed with me. Some people who disagreed with me pointed out specifically what they think is lacking: more regional events, more in specific countries, and more “generic” FLOSS events.

Many projects have a conference, and then there are “generic” conferences, like FOSDEM, LibrePlanet, LinuxConfAU, and FOSSAsia. Some are more corporate (OSCON), while others are more community focused (e.g. SeaGL).

There are just a lot of conferences.

I average a conference a month, with most of them being more general sorts of events, and a few being project specific, like DebConf and GUADEC.

So far in 2019, I went to: FOSDEM, CopyLeft Conf, LibrePlanet, FOSS North, Linux Fest Northwest, OSCON, FrOSCon, GUADEC, and GitLab Commit. I’m going to All Things Open next week. In November I have COSCon scheduled. I’m skipping SeaGL this year. I am not planning on attending 36C3 unless my talk is accepted. I canceled my trip to DebConf19. I did not go to Camp this year. I also had a board meeting in NY, an upcoming one in Berlin, and a Debian meeting in the other Cambridge. I’m skipping LAS and likely going to SFSCon for GNOME.

So that's 9 so far this year, and somewhere between 1 and 4 more, depending on some details.

There are also conferences that don’t happen every year, like HOPE and CubaConf. There are some that I haven’t been to yet, like PyCon, and more regional events like Ohio Linux Fest, SCALE, and FOSSCon in Philadelphia.

I think I travel too much, and plenty of people travel more than I do. This is one of the reasons why we have too many events: the same people are traveling so much.

When you're nose deep in it, when you think that what you're doing is important, you keep going to them as long as you're invited. I really believe in the messages I share during my talks, and I know that by speaking I am reaching audiences I wouldn't otherwise. As long as I keep getting invited places, I'll probably keep going.

Finding sponsors is hard(er).

It is becoming increasingly difficult to find sponsors for conferences. This is my experience, and what I've heard from speaking with others about it: lower response rates to requests, and people choosing lower sponsorship levels than in past years.

CFP responses are not increasing.

I’m yet to hear of any established community-run tech conferences who’ve had growth in their CFP response rate this year.

Peak conference?

— Christopher Neugebauer (@chrisjrn) October 3, 2019

I sort of think the Tweet says it all. Some conferences aren't having this experience. Ones I've been involved with, or spoken to the organizers of, are needing to extend their deadlines and generally seeing lower response rates.

Do I think we need fewer conferences?

Yes and no. I think smaller, regional conferences are really important to reaching communities and individuals who don’t have the resources to travel. I think it gives new speakers opportunities to share what they have to say, which is important for the growth and robustness of FOSS.

Project-specific conferences are useful for those projects. They give us a time to have meetings and sprints, to work and plan, and to learn specifically about our project and feel more connected to our collaborators.

On the other hand, I do think we have more conferences than even a global movement can actively support in terms of speakers, organizer energy, and sponsorship dollars.

What do I think we can do?

Not all of these are great ideas, and not all of them would work for every event. However, I think some combination of them might make a difference for the ecosystem of conferences.

More single-track or two-track conferences. All Things Open has 27 sessions occurring concurrently. Twenty-seven! It’s a huge event that caters to many people, but seriously, that’s too much going on at once. More 3-4 track conferences should consider dropping to 1-2 tracks, and conferences with more should consider dropping their numbers down as well. This means fewer speakers at a time.

Stop trying to grow your conference. Growth feels like a sign of success, but it’s not. It’s a sign of getting more people to show up. It helps you make arguments to sponsors, because more attendees means more people being reached when a company sponsors.

Decrease sponsorship levels. I’ve seen conferences increasing their sponsorship levels. I think we should all agree to decrease those numbers. While we’ll all have to try harder to get more sponsors, companies will be able to sponsor more events.

Stop serving meals. I appreciate a free meal. It makes it easier to attend events, but meals are expensive and logistically difficult. I know meals make it easier for some people, especially students, to attend. Consider offering special RSVP lunches for students, recent grads, and people looking for work.

Ditch the fancy parties. Okay, I also love a good conference party. They're loads of fun, but they can be quite expensive. They also encourage drinking, which I think is bad for the culture.

Ditch the speaker dinners. Okay, I also love a good speaker dinner. It’s fun to relax, see my friends, and have a nice meal that isn’t too loud or overwhelming. These are really expensive. I’ve been trying to donate to local food banks/food insecurity charities an amount equal to the cost of dinner per person, but people are rarely willing to share that information! Paying for a nice dinner out of pocket — with multiple bottles of wine — usually runs $50-80 with tip. I know one dinner I went to was $150 a person. I think the community would be better served if we spent that money on travel grants. If you want to be nice to speakers, I enjoy a box of chocolates I can take home and share with my roommates.

 Give preference to local speakers. One of the things conferences do is bring in speakers from around the world to share their ideas with your community, or with an equally global community. This is cool. By giving preference to local speakers, you’re building expertise in your geography.

Consider combining your community conferences. Rather than having many conferences for smaller communities, consider co-locating conferences and sharing resources (and attendees). This requires even more coordination to organize, but could work out well.

Volunteer for your favorite non-profit or project. A lot of us have booths at conferences, and send people around the world in order to let you know about the work we’re doing. Consider volunteering to staff a booth, so that your favorite non-profits and projects have to send fewer people.

While most of those things are not “have fewer conferences,” I think they would help solve the problems conference saturation is causing: it’s expensive for sponsors, it’s expensive for speakers, it creates a large carbon footprint, and it increases burnout among organizers and speakers.

I must enjoy traveling because I do it so much. I enjoy talking about FOSS, user rights, and digital rights. I like meeting people and sharing with them and learning from them. I think what I have to say is important. At the same time, I think I am part of an unhealthy culture in FOSS that encourages burnout, excessive travel, and unnecessary spending of money that could be used for better things.

One last thing you can do, to help me, is submit talks to your local conference(s). This will help with some of these problems as well, can be a great experience, and is good for your conference and your community!

Markus Koschany: My Free Software Activities in September 2019

11 October, 2019 - 03:49

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • Reiner Herrmann investigated a build failure of supertuxkart on several architectures and prepared an update to link against libatomic. I reviewed and sponsored the new revision which allowed supertuxkart 1.0 to migrate to testing.
  • Python 3 ports: Reiner also ported bouncy, a game for small kids, to Python3 which I reviewed and uploaded to unstable.
  • I upgraded atomix to version 3.34.0 myself, as requested, although it is unlikely that you will find a major difference from the previous version.
Debian Java Misc
  • I packaged new upstream releases of ublock-origin and privacybadger, two popular Firefox/Chromium addons and
  • packaged a new upstream release of wabt, the WebAssembly Binary Toolkit.
Debian LTS

This was my 43rd month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 11.09.2019 until 15.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in libonig, bird, curl, openssl, wpa, httpie, asterisk, wireshark and libsixel.
  • DLA-1922-1. Issued a security update for wpa fixing 1 CVE.
  • DLA-1932-1. Issued a security update for openssl fixing 2 CVE.
  • DLA-1900-2. Issued a regression update for apache fixing 1 CVE.
  • DLA-1943-1. Issued a security update for jackson-databind fixing 4 CVE.
  • DLA-1954-1. Issued a security update for lucene-solr fixing 1 CVE. I triaged CVE-2019-12401 and marked Jessie as not-affected because we use the system libraries of woodstox in Debian.
  • DLA-1955-1. Issued a security update for tcpdump fixing 24 CVE by backporting the latest upstream release to Jessie. I discovered several test failures but after more investigation I came to the conclusion that the test cases were simply created with a newer version of libpcap which causes the test failures with Jessie’s older version. DLA-1955-1 will be available shortly.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my sixteenth month and I have been assigned to work 15 hours on ELTS plus five hours from August. I used 15 of them for the following:

  • I was in charge of our ELTS frontdesk from 30.09.2019 until 06.10.2019 and I triaged CVE in tcpdump. There were no reports of other security vulnerabilities for supported packages in this week.
  • ELA-163-1. Issued a security update for curl fixing 1 CVE.
  • ELA-171-1. Issued a security update for openssl fixing 2 CVE.
  • ELA-172-1. Issued a security update for linux fixing 23 CVE.
  • ELA-174-1. Issued a security update for tcpdump fixing 24 CVE. ELA-174-1 will be available shortly.

Norbert Preining: R with TensorFlow 2.0 on Debian/sid

10 October, 2019 - 13:15

I recently posted on getting TensorFlow 2.0 with GPU support running on Debian/sid. At that time I didn’t manage to get the tensorflow package for R running properly. It didn’t need much to get it running, though.

The biggest problem I faced was that the R/TensorFlow package recommends using install_tensorflow, which can use either auto, conda, virtualenv, or system (at least according to the linked web page). I didn’t want to set up either a conda or a virtualenv environment, since TensorFlow was already installed, so I thought system would be the correct choice. It turns out, though, that the system option is gone and no longer accepted, and I still got errors. In particular, the code mentioned on the installation page is incorrect for TF2.0!

It turned out to be a simple error on my side – the default is to use the program python which in Debian is still Python2, while I have TF only installed for Python3. The magic incantation to fix that is use_python("/usr/bin/python3") and one is set.

So here is a full list of commands to get R/TensorFlow running on top of an already installed TensorFlow for Python3 (as usual either as root to be installed into /usr/local or as user to have a local installation):

devtools::install_github("rstudio/tensorflow")

And if you want to run some TF program:

library(tensorflow)
use_python("/usr/bin/python3")
tf$math$cumprod(1:5)

This gives lots of output, mentioning among other things that it is running on my GPU.

At least for the (probably very short) time being this looks like a workable system. Now off to convert my TF1.N code to TF2.0.

Louis-Philippe Véronneau: Trying out Sourcehut

10 October, 2019 - 11:00

Last month, I decided it was finally time to move a project I maintain from Github [1] to another git hosting platform.

While polling other contributors (I proposed moving to gitlab.com), someone suggested moving to Sourcehut, a newish git hosting platform written and maintained by Drew DeVault. I've been following Drew's work for a while now and although I had read a few blog posts on Sourcehut's development, I had never really considered giving it a try. So I did!

Sourcehut is still in alpha and I'm expecting a lot of things to change in the future, but here's my quick review.

Things I like

Sustainable FOSS

Sourcehut is 100% Free Software. Github is proprietary and I dislike Gitlab's Open Core business model.

Sourcehut's business model also seems sustainable to me, as it relies on people paying a monthly fee for the service. You'll need to pay if you want your code hosted on https://sr.ht once Sourcehut moves into beta. As I've written previously, I like that a lot.

In comparison, Gitlab is mainly funded by venture capital and I'm afraid of the long term repercussions this choice will have.

Continuous Integration

Continuous Integration is very important to me and I'm happy to say Sourcehut's CI is pretty good! Like Travis and Gitlab CI, you declare what needs to happen in a YAML file. The CI uses real virtual machines backed by QEMU, so you can run many different distros and CPU archs!
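For the curious, a build manifest is a small YAML file along these lines (a rough sketch; the image, package list and project URL here are made up for illustration):

image: debian/sid
packages:
  - build-essential
sources:
  - https://git.sr.ht/~example/project
tasks:
  - build: |
      cd project
      make
  - test: |
      cd project
      make check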

Even nicer, you can actually SSH into a failed CI job to debug things. In comparison, Gitlab CI's Interactive Web Terminal is ... web based and thus not as nice. Worse, it seems it's still somewhat buggy as Gitlab still hasn't enabled it on their gitlab.com instance.

Here's what the instructions to SSH into the CI look like when a job fails:

This build job failed. You may log into the failed build environment within 10
minutes to examine the results with the following command:

ssh -t builds@foo.bar connect NUMBER

Sourcehut's CI is not as feature-rich or as flexible as Gitlab CI, but I feel it is more powerful than Gitlab CI's default docker executor. Folks that run integration tests or more complicated setups where Docker fails should definitely give it a try.

From the few tests I did, Sourcehut's CI is also pretty quick (it's definitely faster than Travis or Gitlab CI).

No JS

Although Sourcehut's web interface does bundle some Javascript, all features work without it. Three cheers for that!

Things I dislike

Features division

I'm not sure I like the way features (the issue tracker, the CI builds, the git repository, the wikis, etc.) are subdivided into different subdomains.

For example, when you create a git repository on git.sr.ht, you only get a git repository. If you want an issue tracker for that git repository, you have to create one at todo.sr.ht with the same name. That issue tracker isn't visible from the git repository web interface.

That's the same for all the features. For example, you don't see the build status of a merged commit when you look at it. This design choice makes you feel like the different features aren't integrated with one another.

In comparison, Gitlab and Github use a more "centralised" approach: everything is centered around a central interface (your git repository) and it feels more natural to me.

Discoverability

I haven't seen a way to search sr.ht for things hosted there. That makes it hard to find repositories, issues or even the Sourcehut source code!

Merge Request workflow

I'm a sucker for the Merge Request workflow. I really like to have a big green button I can click on to merge things. I know some people prefer a more manual workflow that uses git merge and stuff, but I find that tiresome.

Sourcehut chose a workflow based on sending patches by email. It's neat since you can submit code without having an account. Sourcehut also provides mailing lists for projects, so people can send patches to a central place.

I find that workflow harder to work with, since to me it makes it more difficult to see what patches have been submitted. It also makes the review process more tedious, since the CI isn't run automatically on email patches.

Summary

All in all, I don't think I'll be moving ISBG to Sourcehut (yet?). At the moment it doesn't quite feel as ready as I'd want it to be, and that's OK. Most of the things I disliked about the service can be fixed by some UI work and I'm sure people are already working on it.

Github was bought by MS for 7.5 billion USD and Gitlab is currently valued at 2.7 billion USD. It's not really fair to ask Sourcehut to fully compete just yet :)

With Sourcehut, Drew DeVault is fighting the good fight and I wish him the most resounding success. Who knows, maybe I'll really migrate to it in a few years!

  1. Github is a proprietary service, has been bought by Microsoft and gosh darn do I hate Travis CI. 

Dirk Eddelbuettel: RcppArmadillo 0.9.800.1.0

10 October, 2019 - 07:59

Another month, another Armadillo upstream release! Hence a new RcppArmadillo release arrived on CRAN earlier today, and was just shipped to Debian as well. It brings a faster solve() method and other goodies. We also switched to the (awesome) tinytest unit test framework, and Min Kim made the configure.ac script more portable for the benefit of NetBSD and other non-bash users; see below for more details. Once again we ran two full sets of reverse-depends checks, no issues were found, and the package was auto-admitted at CRAN after less than two hours despite there being 665 reverse depends. Impressive stuff, so a big Thank You! as always to the CRAN team.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 665 other packages on CRAN.

Changes in RcppArmadillo version 0.9.800.1.0 (2019-10-09)
  • Upgraded to Armadillo release 9.800 (Horizon Scraper)

    • faster solve() in default operation; iterative refinement is no longer applied by default; use solve_opts::refine to explicitly enable refinement

    • faster expmat()

    • faster handling of triangular matrices by rcond()

    • added .front() and .back()

    • added .is_trimatu() and .is_trimatl()

    • added .is_diagmat()

  • The package now uses tinytest for unit tests (Dirk in #269).

  • The configure.ac script is now more careful about shell portability (Min Kim in #270).

Courtesy of CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Adnan Hodzic: Hello world!

9 October, 2019 - 22:55

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Enrico Zini: Fixed XSS issue on debtags.debian.org

9 October, 2019 - 15:51

Thanks to Moritz Naumann who found the issues and wrote a very useful report, I fixed a number of Cross Site Scripting vulnerabilities on https://debtags.debian.org.

The core of the issue was code like this in a Django view:

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        return http.HttpResponseNotFound("Package %s was not found" % name)
    # …

The default content-type of HttpResponseNotFound is text/html, and the string passed becomes the raw HTML body with clearly no escaping, so this allows injection of arbitrary HTML/<script> code via the name variable.
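For instance (the URL pattern here is hypothetical), a request such as

https://debtags.debian.org/pkginfo/<script>alert(1)</script>

would get the script tag reflected verbatim into the body of the 404 response, where the visitor's browser would happily execute it.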

I was so used to Django doing proper auto-escaping that I missed this place in which it can't do that.

There are various things that can be improved in that code.

One could introduce escaping (and while one's at it, migrate the old % to format):

from django.utils.html import escape

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        return http.HttpResponseNotFound("Package {} was not found".format(escape(name)))
    # …

Alternatively, set content_type to text/plain:

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        return http.HttpResponseNotFound("Package {} was not found".format(name), content_type="text/plain")
    # …

Even better, raise Http404:

from django.http import Http404

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        raise Http404(f"Package {name} was not found")
    # …

Even better, use standard shortcuts and model functions if possible:

from django.shortcuts import get_object_or_404

def pkginfo_view(request, name):
    pkg = get_object_or_404(bmodels.Package, name=name)
    # …

And finally, though not security related, it's about time to switch to class-based views:

from django.shortcuts import get_object_or_404
from django.views.generic import TemplateView

class PkgInfo(TemplateView):
    template_name = "reports/package.html"

    def get_context_data(self, **kw):
        ctx = super().get_context_data(**kw)
        ctx["pkg"] = get_object_or_404(bmodels.Package, name=self.kwargs["name"])
        # …
        return ctx

I proceeded with a review of the other Django sites I maintain, in case I had made the same mistake there too.

Chris Lamb: Tour d'Orwell: Southwold

9 October, 2019 - 07:29

I recently read that during 1929 George Orwell returned to his family home in the Suffolk town of Southwold, and when I further learned that he had acquired a motorbike during this time to explore the surrounding villages, I could not resist visiting myself on such a mode of transport.

Orwell would end up writing his first novel here ("Burmese Days") followed by his first passable one ("A Clergyman's Daughter"), but unfortunately the local bookshop happened to have only the former in stock. He moved back to London in 1934 to work in a bookshop in Hampstead, now a «Le Pain Quotidien».

If you are thinking of visiting, Southwold has some lovely quaint beach huts and a brewery. The officially signposted A1120 "Scenic Route" I took on the way out was neither as picturesque nor as fun to ride as the A1066.

Antoine Beaupré: Tip of the day: batch PDF conversion with LibreOffice

8 October, 2019 - 23:28

Someone asked me today why they couldn't write on the DOCX document they received from a student using the pen in their Onyx Note Pro reader. The answer, of course, is that while the Onyx can read those files, it can't annotate them: that only works with PDFs.

Next question then, is of course: do I really need to open each file separately and save them as PDF? That's going to take forever, I have 30 students per class!

Fear not: shell scripting and headless mode fly to the rescue!

As it turns out, one of the LibreOffice parameters allows you to run batch operations on files. By calling:

libreoffice --headless --convert-to pdf *.docx

LibreOffice will happily convert all the *.docx files in the current directory to PDF. But because navigating the commandline can be hard, I figured I could push this a tiny little bit further and wrote the following script:

#!/bin/sh

exec libreoffice --headless --convert-to pdf "$@"

Drop this in ~/.local/share/nautilus/scripts/libreoffice-pdf, mark it executable, and voilà! You can batch-convert basically any text file (or anything supported by LibreOffice, really) into PDF.
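Concretely, assuming you saved the script as libreoffice-pdf in the current directory, installing it amounts to:

mkdir -p ~/.local/share/nautilus/scripts
cp libreoffice-pdf ~/.local/share/nautilus/scripts/
chmod +x ~/.local/share/nautilus/scripts/libreoffice-pdf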

Now I wonder if this would be a useful addition to the Debian package, anyone?

Steve Kemp: A blog overhaul

8 October, 2019 - 22:00

When this post becomes public I'll have successfully redeployed my blog!

My blog originally started in 2005 as a Wordpress installation, at some point I used Mephisto, and then I wrote my own solution.

My project was pretty cool; I'd parse a directory of text-files, one file for each post, and insert them into an SQLite database. From there I'd initiate a series of plugins, each one to generate something specific:

  • One plugin would output an archive page.
  • Another would generate a tag cloud.
  • Yet another would generate the actual search-results for a particular month/year, or tag-name.

All in all the solution was flexible and it wasn't too slow because finding posts via the SQLite database was pretty good.

Anyway, I've come to realize that the freedom and architecture were overkill. I don't need fancy presentation, and I don't need a loosely-coupled set of plugins.

So now I have a simpler solution which uses my existing template, uses my existing posts - with only a few cleanups - and generates the site from scratch, including all the comments, in less than 2 seconds.

After running make clean a complete rebuild via make upload (which deploys the generated site to the remote host via rsync) takes 6 seconds.

I've lost the ability to be flexible in some areas, but I've gained all the speed. The old project took somewhere between 20-60 seconds to build, depending on what had changed.

In terms of simplifying my life, I've dropped the remote installation of a site-search, which means I can now host this as a static site with only a single handler to receive any post-comments. (I was 50/50 on keeping comments. I didn't want to lose those I'd already received, and I do often find valuable and interesting contributions from readers, but being 100% static had its appeal too. I guess they stay for the next few years!)
