Planet Debian

Planet Debian - https://planet.debian.org/

Jonathan Carter: Calamares Plans for Debian 11

17 October, 2019 - 16:01

Brief history of Calamares in Debian

Before Debian 9 was released, I was preparing a release for a derivative of Debian that was a bit different from other Debian systems I’ve prepared for redistribution before. This one was targeted at end users, some of whom might have used Ubuntu before, but who otherwise had no Debian-related experience. I needed to find a way to make Debian really easy for them to install. Several options were explored, and I found that Calamares did a great job of making it easy for typical users to get up and running fast.

After Debian 9 was released, I learned that other Debian derivatives were also using Calamares or planning to do so. It started to make sense to package Calamares in Debian so that we don’t duplicate work across all these projects. On its own, Calamares isn’t very useful: if you ran the pure upstream version on Debian, it would crash before it even started to install anything, because Calamares needs some configuration and helpers that depend on the distribution. Most notably, in Debian’s case this means setting the location of the squashfs image we want to copy over, plus some scripts to install either grub-pc or grub-efi depending on how we boot. Since I had already done most of the work to figure all of this out, I created a package called calamares-settings-debian, which contains enough configuration to install Debian using Calamares, so that derivatives can easily copy and adapt it into their own calamares-settings-* packages for use in their systems.

In Debian 9, the live images were released without an installer available in the live session. Unfortunately, the debian-installer live session used in previous releases had become hard to maintain and had a growing number of bugs that made it unsuitable for release, so Steve from the live team suggested that we add Calamares to the Debian 10 test builds and give it a shot. That surprised me, because I never thought Calamares would actually ship on official Debian media. We tried it out, it worked well, and the Debian 10 live media was released with it. It turned out great: every review of Debian 10 I’ve seen so far had very good things to say about it, and the very few problems people have found have already been fixed upstream (I plan to backport those fixes to buster soon).

Plans for Debian 11 (bullseye)

New slideshow

If I had to choose my biggest regret regarding the Debian 10 release, this slideshow would probably be it. It’s just the one slide depicted above. The time needed to create a nice slideshow was one constraint, but another was that I didn’t have enough time to figure out how its translations work and do a proper call for translations before the hard freeze. I consider the slideshow a golden opportunity to explain to new users what the Debian project is about and what this new Debian system they’re installing is capable of, so this was a huge missed opportunity that I don’t want to repeat.

I intend to pull in some help from the web team, the publicity team, and anyone else who might be interested, to cover slides along the following topics (just a quick braindump; the final slides will likely have significantly different content):

  • The Debian project, and what it’s about
    • Who develops Debian and why
    • Cover the social contract, and perhaps touch on the DFSG
  • Who uses Debian? Mention notable users and use cases
    • Big and small commercial companies
    • Educational institutions
    • …even NASA?
  • What Debian can do
    • Explain vast package library
    • Provide some tips and tricks on what to do next once the system is installed
  • Where to get help
    • Where to get documentation
    • Where to get additional help

It shouldn’t get too heavy and shouldn’t run longer than a maximum of three minutes or so, because in some cases that might be all the time we have during this stage of the installation.

Try out RAID support

Calamares now has RAID support. It’s still very new and as far as I know it’s not yet widely tested. It needs to be enabled as a compile-time option and depends on kpmcore 4.0.0, which Calamares uses for its partitioning backend. kpmcore 4.0.0 just entered unstable this week, so I plan to do an upload to test this soon.

RAID support is one of the biggest features missing from Calamares, and enabling it would make it a lot more useful for typical office environments, where RAID 1 is typically used on workstations. Some consider RAID on desktops somewhat less important than it used to be: with fast SSDs and network provisioning over gigabit ethernet, it’s really quick to recover from a failed disk, but you still have downtime until the person responsible pops over to replace that disk. With RAID 1 you can avoid or drastically decrease that downtime, which makes the cost of the extra disk completely worthwhile.

Add Debian-specific options

The intent is to keep the installer simple, so adding new options is a tricky business, but it would be nice to cover some Debian-specific options in the installer just like debian-installer does. At this point I’m considering adding:

  • Choosing a mirror. Currently it just defaults to writing a sources.list file that uses deb.debian.org, which is usually just fine (see the sketch after this list).
  • Add an option to enable popularity contest (popcon). As a Debian developer I find popcon stats quite useful. Even though just a small percentage of people enable it, it provides enough data to help us understand how widely packages are used, especially in relation to other packages. I’m also slightly concerned that desktop users who now use Calamares instead of d-i and who forget to enable popcon after installation will end up skewing popcon results for desktop packages compared to previous releases.
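
For reference, a minimal sources.list of the kind the installer currently writes would look roughly like this (a sketch for buster; the exact contents may differ):

deb http://deb.debian.org/debian buster main
deb http://security.debian.org/debian-security buster/updates main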

Skip files that we’re deleting anyway

At DebConf19, I gave a lightning talk titled “Is it possible to install Debian in a lightning talk slot?”. The answer was sadly “No.”. The idea is that you should be able to install a full Debian desktop system within 5 minutes. In my preparations for the talk, I got it down to just under 6 minutes. It ended up taking just under 7 minutes during my lightning talk, probably because I forgot to plug my laptop into a power source and somehow got throttled to save power. Under 7 minutes is fast, but the exercise got me looking at what wasted the most time during installation.

Of the avoidable things that happen during installation, what takes up the most time by a large margin is removing packages that we don’t want on the installed system. During installation, the whole live system is copied from the installation media over to the hard disk, and then the live packages (including Calamares) are removed from that installation. APT (or, more specifically, dpkg) is notorious for playing it safe with filesystem operations, so removing all these live packages takes quite some time (more than even copying them there in the first place).

The contents of the squashfs image are copied over to the filesystem using rsync, so it is possible to provide an exclude list of files that we don’t want. I filed a bug in Calamares to add support for such an exclude list, which was added in version 3.2.15, released this week. Now we also need to add support in the live image build scripts to generate these file lists based on the packages we want to remove, but that’s part of a different long blog post altogether.
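
To illustrate the mechanism, an rsync invocation with an exclude list looks roughly like this (the paths are hypothetical, not the ones Calamares actually uses):

rsync -a --exclude-from=/tmp/unpackfs-exclude.list /run/medium/live/filesystem/ /mnt/target/

Every path listed in the exclude file is simply never copied, so there is nothing left for APT to delete afterwards.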

This feature also opens the door for a minimal mode option, where you could choose to skip non-essential packages such as LibreOffice and GIMP. In reality these packages will still be removed using APT in the destination filesystem, but it will be significantly faster since APT won’t have to remove any real files. The Ubuntu installer (Ubiquity) has done something similar for a few releases now.

Add a framebuffer session

As is the case with most Qt5 applications, Calamares can run directly on the Linux framebuffer without the need for Xorg or Wayland. To try it out, all you need to do is run “sudo calamares -platform linuxfb” on a live console and you’ll get Calamares right there in your framebuffer. It’s not tested upstream so it looks a bit rough. As far as I know I’m the only person so far to have installed a system using Calamares on the framebuffer.

The plan is to create a systemd unit to launch this at startup if ‘calamares’ is passed as a boot parameter. This way, derivatives that want this and that use calamares-settings-debian (or their own fork) can just create a boot menu entry to activate the framebuffer installation without any additional work. I don’t think it should be too hard to make it look decent in this mode either.
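
A minimal sketch of what such a unit could look like (the unit name is hypothetical; ConditionKernelCommandLine does the boot parameter check):

[Unit]
Description=Calamares installer on the Linux framebuffer
ConditionKernelCommandLine=calamares

[Service]
ExecStart=/usr/bin/calamares -platform linuxfb

[Install]
WantedBy=multi-user.target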

Calamares on the framebuffer might also be useful for people who ship headless appliances based on Debian but who still need a simple installer.

Document calamares-settings-debian for derivatives

As the person who put together most of calamares-settings-debian, I consider it quite easy to understand and adapt for other distributions. But even so, it takes a lot more time for someone who wants to adapt it for a derivative to delve into it than it would to just read some quick documentation on it first.

I plan to write documentation for calamares-settings-debian on the Debian wiki that covers everything it does and how to adapt it for derivatives.

Improve Scripts

When writing helper scripts for Calamares in Debian, I focused on getting them working reliably and in time for the hard freeze. I cringed when looking at some of these again after the buster release: they’re not entirely horrible, but they could use better conventions and be easier to follow, so I want to get them right for Bullseye. Some scripts might even be eliminated if we can build better images. For example, we install either grub-efi or grub-pc from the package pool on the installation media based on the boot method used, because in the past you couldn’t have both installed at the same time, so they were just shipped as additional available packages. With changes in the GRUB packaging (in place for a few releases now already), it’s possible to have grub-efi and grub-pc-bin installed at the same time, so if we install both at build time it may be possible to simplify those pieces (and also save another few precious seconds of install time).
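
At image build time, the change would amount to something like this (a sketch; the package names are the current Debian ones, and in practice this belongs in the live image’s package lists):

apt install grub-pc-bin grub-efi-amd64-bin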

Feature Requests

I’m sure some people reading this will have more ideas, but I’m not in a position to accept feature requests right now; Calamares is just one of a whole bunch of areas in Debian I’m working on this release. If you have ideas or feature requests, rather consider filing them in Calamares’ upstream bug tracker on GitHub, or get involved in the efforts. Calamares has an IRC channel on freenode (#calamares), and for Debian-specific stuff you can join the Debian live channel on OFTC (#debian-live).

Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data

17 October, 2019 - 10:00

Two weeks ago, I took the Montreal subway with my SO and casually mentioned it would be nice to understand why the Joliette subway station has more foot traffic than the next one, Pie-IX. Is the part of the neighborhood served by the Joliette station denser? Would there be a correlation between mean household income and foot traffic? Has the more aggressive gentrification around the Joliette station affected its foot traffic?

Much to my surprise, instead of sharing my urbanistical enthusiasm, my SO readily disputed what I thought was an irrefutable fact: "Pie-IX has more foot traffic than Joliette, mainly because of the superior amount of bus lines departing from it" she told me.

Shaken to the core, I decided to prove to the world I was right and asked Société de Transport de Montréal (STM) for the foot traffic data of the Montreal subway.

Turns out I was wrong (Pie-IX is about twice as big as Joliette...) and individual observations are often untrue. Shocking, right?!

Visualisations

STM kindly sent me daily values for each subway station from 2001 to 2018. Armed with all this data, I decided to play a little with R and came up with some interesting graphs.

Behold this semi-interactive map of Montreal's subway! By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

I also made per-line graphs that include data from multiple stations. Some of them (like the Orange line) are quite crowded though:

Licences
  • The subway map displayed on this page, the original dataset and my modified dataset are licensed under CC0 1.0: they are in the public domain.

  • The R code I wrote is licensed under the GPLv3+. Feel free to reuse it if you get a more up to date dataset from the STM.

Antoine Beaupré: Theory: average bus factor = 1

17 October, 2019 - 02:21

Two articles recently made me realize that all my free software projects basically have a bus factor of one. I am the sole maintainer of every piece of software I have ever written that I still maintain. There are projects that I have been the maintainer of which have other maintainers now (most notably AlternC, Aegir and Linkchecker), but I am not the original author of any of those projects.

Now that I have a full time job, I feel the pain. Projects like Gameclock, Monkeysign, Stressant, and (to a lesser extent) Wallabako all need urgent work: the first three need to be ported to Python 3, the first two to GTK 3, and the latter will probably die because I am getting a new e-reader. (For the record, more recent projects like undertime and feed2exec are doing okay, mostly because they were written in Python 3 from the start, and the latter has extensive unit tests. But they do suffer from the occasional bitrot (the latter in particular) and need constant upkeep.)

Now that I barely have time to keep up with just the upkeep, I can't help but think all of my projects will just die if I stop working on them. I have the same feeling about the packages I maintain in Debian.

What does that mean? Does that mean those packages are useless? That no one cares enough to get involved? That I'm not doing a good job at including contributors?

I don't think so. I think I'm a friendly person online, and I try my best at doing good documentation and followup on my projects. What I have come to understand is even more depressing and scary than this being a personal failure: that is the situation with everyone, everywhere. The LWN article is not talking about silly things like a chess clock or a feed reader: we're talking about the Linux input drivers. A very deep, core component of the vast majority of computers running on the planet depends on that single maintainer. And I'm not talking about whether those people are paid or not; that's related, but not directly the question here. The same realization occurred with OpenSSL and NTP, GnuPG is in a similar situation, and the list just goes on and on.

A single guy maintains those projects! Is that a fluke? A statistical anomaly? Everything I feel, and read, and know in my decades of experience with free software shows me a reality that I've been trying to deny for all that time: it's the average.

My theory is this: our average bus factor is one. I don't have any hard evidence to back this up, no hard research to rely on. I'd love to be proven wrong. I'd love for this to change.

But unless the economics of technology production change significantly in the coming decades, this problem will remain, and probably worsen, as we keep on scaffolding an entire civilization on the shoulders of hobbyists who are barely aware their work is being used to power phones, cars, airplanes and hospitals. A lot has been written on this, but nothing seems to be moving.

And if that doesn't scare you, it damn well should. As a user, one thing you can do is, instead of wondering if you should buy a bit of proprietary software, consider using free software and donating that money to free software projects instead. Lobby governments and research institutions to sponsor only free software projects. Otherwise this civilization will collapse in a crash of spaghetti code before it even has time to get flooded over.

Jonathan Dowland: PhD Poster

16 October, 2019 - 20:53

Thumbnail of the poster

Today the School of Computing organised a poster session for stage 2 & 3 PhD candidates. Here's the poster I submitted. jdowland-phd-poster.pdf (692K)

This is the first poster I've prepared for my PhD work. I opted to follow the "BetterPoster" principles established by Mike Morrison. These are best summarized in his #BetterPoster 2 minute YouTube video. I adapted this LaTeX #BetterPoster template. This template is licensed under the GPL v3.0 which requires me to provide the source of the poster, so here it is.

After browsing around other students' posters, two things I would now add to mine are a mugshot (so people could easily determine whose poster it was, if they wanted to ask questions) and Red Hat's logo, to acknowledge their support of my work.

Julien Danjou: Sending Emails in Python — Tutorial with Code Examples

15 October, 2019 - 17:33

What do you need to send an email with Python? Some basic programming and web knowledge, along with elementary Python skills. I assume you’ve already built a web app with this language and now need to extend its functionality with notifications or other email sending. This tutorial will guide you through the most essential steps of sending emails via an SMTP server:

  1. Configuring a server for testing (do you know why it’s important?)
  2. Local SMTP server
  3. Mailtrap test SMTP server
  4. Different types of emails: HTML, with images, and attachments
  5. Sending multiple personalized emails (Python is just invaluable for email automation)
  6. Some popular email sending options like Gmail and transactional email services

Served with numerous code examples written and tested on Python 3.7!

Sending an email using SMTP

The first good news about Python is that it has a built-in module for sending emails via SMTP in its standard library. No extra installations or tricks are required. You can import the module using the following statement:

import smtplib

To make sure that the module has been imported properly and get the full description of its classes and arguments, type in an interactive Python session:

help(smtplib)

At our next step, we will talk a bit about servers: choosing the right option and configuring it.

An SMTP server for testing emails in Python

When creating a new app or adding any functionality, especially when doing it for the first time, it’s essential to experiment on a test server. Here is a brief list of reasons:

  1. You won’t hit your friends’ and customers’ inboxes. This is vital when you test bulk email sending or work with an email database.
  2. You won’t flood your own inbox with testing emails.
  3. Your domain won’t be blacklisted for spam.

Local SMTP server

If you prefer working in the local environment, the local SMTP debugging server might be an option. For this purpose, Python offers an smtpd module. It has a DebuggingServer feature, which will discard messages you are sending out and print them to stdout. It is compatible with all operating systems.

Set your SMTP server to localhost:1025

python -m smtpd -n -c DebuggingServer localhost:1025

In order to run the SMTP server on port 25, you’ll need root permissions:

sudo python -m smtpd -n -c DebuggingServer localhost:25

It will help you verify whether your code is working and point out the possible problems if there are any. However, it won’t give you the opportunity to check how your HTML email template is rendered.

Fake SMTP server

A fake SMTP server imitates the work of a real 3rd party web server. In further examples in this post, we will use Mailtrap. Beyond testing email sending, it will let us check how the email will be rendered and displayed, review the raw message data, and provide us with a spam report. Mailtrap is very easy to set up: you just need to copy the credentials generated by the app and paste them into your code.

Here is how it looks in practice:

import smtplib

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # your password generated by Mailtrap

Mailtrap makes things even easier. Go to the Integrations section in the SMTP settings tab and get the ready-to-use template of the simple message, with your Mailtrap credentials in it. The most basic way of instructing your Python script on who sends what to whom is the sendmail() instance method:

The code looks pretty straightforward, right? Let’s take a closer look at it and add some error handling (see the comments in between). To catch errors, we use the try and except blocks.

# The first step is always the same: import all necessary components:
import smtplib
from socket import gaierror

# Now you can play with your code. Let’s define the SMTP server separately here:
port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

# Specify the sender’s and receiver’s email addresses:
sender = "from@example.com"
receiver = "mailtrap@example.com"

# Type your message: use two newlines (\n) to separate the subject from the message body, and use 'f' to automatically insert variables in the text
message = f"""\
Subject: Hi Mailtrap
To: {receiver}
From: {sender}

This is my first message with Python."""

try:
  # Send your message with credentials specified above
  with smtplib.SMTP(smtp_server, port) as server:
    server.login(login, password)
    server.sendmail(sender, receiver, message)
except (gaierror, ConnectionRefusedError):
  # tell the script to report if your message was sent or which errors need to be fixed
  print('Failed to connect to the server. Bad connection settings?')
except smtplib.SMTPServerDisconnected:
  print('Failed to connect to the server. Wrong user/password?')
except smtplib.SMTPException as e:
  print('SMTP error occurred: ' + str(e))
else:
  print('Sent')

Once you get the Sent result in your shell, you should see your message in your Mailtrap inbox:

Sending emails with HTML content

In most cases, you need to add some formatting, links, or images to your email notifications. We can simply include all of these in the HTML content. For this purpose, Python has an email package.

We will deal with the MIME message type, which is able to combine HTML and plain text. In Python, it is handled by the email.mime module.

It is better to write a text version and an HTML version separately, and then merge them with the MIMEMultipart("alternative") instance. This means that such a message has two rendering options. If the HTML isn’t rendered successfully for some reason, the text version will still be available.

import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart("alternative")
message["Subject"] = "multipart test"
message["From"] = sender_email
message["To"] = receiver_email
# Write the plain text part
text = """\ Hi, Check out the new post on the Mailtrap blog: SMTP Server for Testing: Cloud-based or Local? https://blog.mailtrap.io/2018/09/27/cloud-or-local-smtp-server/ Feel free to let us know what content would be useful for you!"""

# write the HTML part
html = """\ <html> <body> <p>Hi,<br> Check out the new post on the Mailtrap blog:</p> <p><a href="https://blog.mailtrap.io/2018/09/27/cloud-or-local-smtp-server">SMTP Server for Testing: Cloud-based or Local?</a></p> <p> Feel free to <strong>let us</strong> know what content would be useful for you!</p> </body> </html> """

# convert both parts to MIMEText objects and add them to the MIMEMultipart message
part1 = MIMEText(text, "plain")
part2 = MIMEText(html, "html")
message.attach(part1)
message.attach(part2)

# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail( sender_email, receiver_email, message.as_string() )

print('Sent')
The resulting output

Sending Emails with Attachments in Python

The next step in mastering sending emails with Python is attaching files. Attachments are still MIME objects, but we need to encode them with the base64 module. A couple of important points about attachments:

  1. Python lets you attach text files, images, audio files, and even applications. You just need to use the appropriate email class like email.mime.audio.MIMEAudio or email.mime.image.MIMEImage. For the full information, refer to this section of the Python documentation.
  2. Keep the file size in mind: sending files over 20MB is bad practice.

In transactional emails, PDF files are the most frequently used: we usually get receipts, tickets, boarding passes, order confirmations, etc. So let’s review how to send a boarding pass as a PDF file.

import smtplib
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

subject = "An example of boarding pass"
sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart()
message["From"] = sender_email
message["To"] = receiver_email
message["Subject"] = subject

# Add body to email
body = "This is an example of how you can send a boarding pass in attachment with Python"
message.attach(MIMEText(body, "plain"))

filename = "yourBP.pdf"
# Open PDF file in binary mode
# We assume that the file is in the directory where you run your Python script from
with open(filename, "rb") as attachment:
  # The content type "application/octet-stream" means that a MIME attachment is a binary file
  part = MIMEBase("application", "octet-stream")
  part.set_payload(attachment.read())

# Encode to base64
encoders.encode_base64(part)
# Add header
part.add_header("Content-Disposition", f"attachment; filename= {filename}")
# Add attachment to your message and convert it to string
message.attach(part)

text = message.as_string()
# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, text)

print('Sent')
The received email with your PDF

To attach several files, you can call the message.attach() method several times.
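
For example, a short sketch that reuses the imports and the message object from the previous example (the filenames are made up):

# Build one MIME part per file and attach each to the same message
for filename in ["ticket.pdf", "receipt.pdf"]:
  with open(filename, "rb") as attachment:
    part = MIMEBase("application", "octet-stream")
    part.set_payload(attachment.read())
  encoders.encode_base64(part)
  part.add_header("Content-Disposition", f"attachment; filename= {filename}")
  message.attach(part)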

How to send an email with image attachment

Images, even if they are a part of the message body, are attachments as well. There are three types: CID attachments (embedded as a MIME object), base64 images (inline embedding), and linked images.

For adding a CID attachment, we will create a MIME multipart message with a MIMEImage component:

import smtplib
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart("alternative")
message["Subject"] = "CID image test"
message["From"] = sender_email
message["To"] = receiver_email

# write the HTML part
html = """\
<html>
<body>
<img src="cid:myimage">
</body>
</html>
"""
part = MIMEText(html, "html")
message.attach(part)

# We assume that the image file is in the same directory that you run your Python script from
with open('mailtrap.jpg', 'rb') as img:
  image = MIMEImage(img.read())
# Specify the Content-ID according to the img src in the HTML part
image.add_header('Content-ID', '<myimage>')
message.attach(image)

# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, message.as_string())

print('Sent')
The received email with CID image

The CID image is shown both as a part of the HTML message and as an attachment. Messages with this image type are often considered spam: check the Analytics tab in Mailtrap to see the spam rate and recommendations on its improvement. Many email clients — Gmail in particular — don’t display CID images in most cases. So let’s review how to embed a base64 encoded image instead.

Here we will use the base64 module and experiment with the same image file:

import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import base64

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap
sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart("alternative")
message["Subject"] = "inline embedding"
message["From"] = sender_email
message["To"] = receiver_email

# We assume that the image file is in the same directory that you run your Python script from
with open("image.jpg", "rb") as image:
  encoded = base64.b64encode(image.read()).decode()

html = f"""\
<html>
<body>
<img src="data:image/jpg;base64,{encoded}">
</body>
</html>
"""
part = MIMEText(html, "html")
message.attach(part)

# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, message.as_string())

print('Sent')
A base64 encoded image

Now the image is embedded into the HTML message and is not available as an attached file. Python has encoded our JPEG image, and if we go to the HTML Source tab, we will see the long image data string in the img src attribute.

How to Send Multiple Emails

Sending multiple emails to different recipients and making them personal is where Python really comes into its own.

To add several more recipients, you can just type their addresses in, separated by commas, and add Cc and Bcc the same way (see the sketch below). But if you are working with bulk email sending, Python will save you with loops.
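
Here is a quick sketch of the Cc/Bcc handling, assuming a freshly created message and a logged-in server as in the earlier examples (the addresses are made up). The headers only control what recipients see; the list passed to sendmail() determines who actually receives the message, which is why Bcc addresses go into the list but not into a header:

to = ["new@example.com", "second@example.com"]
cc = ["copy@example.com"]
bcc = ["hidden@example.com"]

message["To"] = ", ".join(to)
message["Cc"] = ", ".join(cc)
# No Bcc header: Bcc recipients are only listed in the envelope below
server.sendmail(sender_email, to + cc + bcc, message.as_string())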

One of the options is to create a database in CSV format (we assume it is saved to the same folder as your Python script).

We often see our names in transactional or even promotional examples. Here is how we can make it with Python.

Let’s organize the list in a simple table with just two columns: name and email address. It should look like the following example:

#name,email
John Johnson,john@johnson.com
Peter Peterson,peter@peterson.com

The code below will open the file and loop over its rows line by line, replacing {name} with the value from the “name” column.

import csv
import smtplib

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

message = """Subject: Order confirmation
To: {recipient}
From: {sender}
Hi {name}, thanks for your order! We are processing it now and will contact you soon"""
sender = "new@example.com"
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  with open("contacts.csv") as file:
  reader = csv.reader(file)
  next(reader)  # it skips the header row
  for name, email in reader:
    server.sendmail(
      sender,
      email,
      message.format(name=name, recipient=email, sender=sender),
    )
    print(f'Sent to {name}')

In our Mailtrap inbox, we see two messages: one for John Johnson and another for Peter Peterson, delivered simultaneously:


Sending emails with Python via Gmail

When you are ready to send emails to real recipients, you can configure your production server. It also depends on your needs, goals, and preferences: your localhost or any external SMTP.

One of the most popular options is Gmail so let’s take a closer look at it.

We can often see titles like “How to set up a Gmail account for development”. In fact, it means that you will create a new Gmail account and will use it for a particular purpose.

To be able to send emails via your Gmail account, you need to provide your application access to it. You can Allow less secure apps or take advantage of the OAuth2 authorization protocol. The latter is way more difficult, but is recommended for security reasons.

Further, to use a Gmail server, you need to know:

  • the server name = smtp.gmail.com
  • port = 465 for SSL/TLS connection (preferred)
  • or port = 587 for STARTTLS connection
  • username = your Gmail email address
  • password = your password

import smtplib
import ssl

port = 465
password = input("your password")
context = ssl.create_default_context()

with smtplib.SMTP_SSL("smtp.gmail.com", port, context=context) as server:
  server.login("my@gmail.com", password)

If you tend towards simplicity, you can use Yagmail, a dedicated Gmail/SMTP library. It makes email sending really easy. Just compare the above examples with these several lines of code:

import yagmail

yag = yagmail.SMTP()
contents = [
  "This is the body, and here is just text http://somedomain/image.png",
  "You can find an audio file attached.",
  '/local/path/to/song.mp3',
]
yag.send('to@someone.com', 'subject', contents)

Next steps with Python

Those are just the basic options for sending emails with Python. To get great results, review the Python documentation and experiment with your own code!

There are a bunch of Python frameworks and libraries that can make creating apps more elegant and focused. In particular, some of them can help improve your experience of building email sending functionality:

The most popular ones are:

  1. Flask, which offers a simple interface for email sending: Flask Mail.
  2. Django, which can be a great option for building HTML templates.
  3. Zope comes in handy for website development.
  4. Marrow Mailer is a dedicated mail delivery framework adding various helpful configurations.
  5. Plotly and its Dash can help with mailing graphs and reports.

Also, here is a handy list of Python resources sorted by their functionality.

Good luck and don’t forget to stay on the safe side when sending your emails!

This article was originally published at Mailtrap’s blog: Sending emails with Python

Rapha&#235;l Hertzog: Freexian’s report about Debian Long Term Support, September 2019

15 October, 2019 - 14:20


Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 212.75 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Adrian Bunk did nothing (and got no hours assigned), but has been carrying 26h from August to October.
  • Ben Hutchings did 20h (out of 20h assigned).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 30h (out of 23.75h assigned and 5.25h from August), thus anticipating 1h from October.
  • Hugo Lefeuvre did nothing (out of 23.75h assigned), thus is carrying over 23.75h for October.
  • Jonas Meurer did 5h (out of 10h assigned and 9.5h from August), thus carrying over 14.5h to October.
  • Markus Koschany did 23.75h (out of 23.75h assigned).
  • Mike Gabriel did 11h (out of 12h assigned + 0.75h remaining), thus carrying over 1.75h to October.
  • Ola Lundqvist did 2h (out of 8h assigned and 8h from August), thus carrying over 14h to October.
  • Roberto C. Sánchez did 16h (out of 16h assigned).
  • Sylvain Beucler did 23.75h (out of 23.75h assigned).
  • Thorsten Alteholz did 23.75h (out of 23.75h assigned).

Evolution of the situation

September was more like a regular month again, though two contributors were not able to dedicate any time to LTS work.

For October we are welcoming Utkarsh Gupta as a new paid contributor. Welcome to the team, Utkarsh!

This month, we’re glad to announce that Cloudways is joining us as a new silver level sponsor! With the loss of another long-term sponsor, we are still at 216 hours sponsored per month: this new sponsor is just replacing one that stopped after multiple years of contribution.

The security tracker currently lists 32 packages with a known CVE and the dla-needed.txt file has 37 packages needing an update.

Thanks to our sponsors



Norbert Preining: State of Calibre in Debian

15 October, 2019 - 12:25

To counter some recent FUD spread about Calibre in general and Calibre in Debian in particular, here is a concise explanation of the current state.

Many might have read my previous post on Calibre as a moratorium, but that was not my intention. Development of Calibre in Debian is continuing, despite the current stall.

Since it seems to be unclear what the current blockers are: there are two orthogonal problems regarding recent Calibre in Debian. One is the update to version 4 and the switch to qtwebengine; the other is the purge of Python 2 from Debian.

Current state

Debian sid and testing currently hold Calibre 3.48, based on Python 2. Due to the ongoing purge, necessary modules (in particular python-cherrypy3) have been removed from Debian/sid, making the current Calibre package RC-buggy (see this bug report). That means that, within a reasonable time frame, Calibre will be removed from testing.

Now for the two orthogonal problems we are facing:

Calibre 4 packaging

Calibre 4 is already packaged for Debian (see the master-4.0 branch in the git repository). Uploading was first blocked by a disappearing python-pyqt5.qwebengine, which was extracted from the PyQt5 package into its own package. Thanks to the maintainers, we now have a Python 2 version built from the qtwebengine-opensource-src package.

But that still doesn’t cut it for Calibre 4, because it requires Qt 5.12, but Debian still carries 5.11 (released 1.5 years ago).

So the above mentioned branch is ready for upload as soon as Qt 5.12 is included in Debian.

Python 3

The other big problem is the purge of Python 2 from Debian. Upstream Calibre has supported building Python 3 versions for some months now, with ongoing bug fixes. But including this in Debian poses some problems: the first stumbling block was a missing Python 3 version of mechanize, which I have adopted after a 7-year hiatus, updated to the newest version, and provided Python 3 modules for.

Packaging for Debian is done in the experimental branch of the git repository, and is again ready to be uploaded to unstable.

But the much bigger concern here is that practically no Calibre plugin is ready for Python 3. Paired with the fact that probably most users of Calibre use one plugin or another, uploading a Python 3 based version of Calibre would break usage for practically all users.

Since I put our (Debian’s) users first, I have thus decided to keep Calibre based on Python 2 as long as Debian allows. Unfortunately, the overzealous purge spree has already introduced RC bugs, which means I am now forced to decide whether I upload a version of Calibre that breaks most users, or don’t upload and see Calibre removed from testing. Not an easy decision.

Thus, my original plan was to keep Calibre based on Python 2 as long as possible, and hope that upstream switches to Python 3 in time for the next Debian release. This would trigger a continuous update of most plugins and would allow users in Debian a seamless transition without complete breakage. Unfortunately, this plan does not seem actually executable.

Now let us return to the FUD spread:

  • Calibre is actively developed upstream
  • Calibre in Debian is actively maintained
  • Calibre is Python 3 ready, but the plugins are not
  • Calibre 4 is ready for Debian as soon as the dependencies are updated
  • Calibre/Python3 is ready for upload to Debian, but breaks practically all users

Hope that helps everyone to gain some understanding about the current state of Calibre in Debian.

Sergio Durigan Junior: Installing Gerrit and Keycloak for GDB

15 October, 2019 - 11:00

Back in September, we had the GNU Tools Cauldron in the gorgeous city of Montréal (perhaps I should write a post specifically about it...). One of the sessions we had was the GDB BoF, where we discussed, among other things, how to improve our patch review system.

I have my own personal opinions about the current review system we use (mailing list-based, in a nutshell), and I haven't felt very confident to express it during the discussion. Anyway, the outcome was that at least 3 global maintainers have used or are currently using the Gerrit Code Review system for other projects, are happy with it, and that we should give it a try. Then, when it was time to decide who wanted to configure and set things up for the community, I volunteered. Hey, I'm already running the Buildbot master for GDB, what is the problem to manage yet another service? Oh, well.

Before we dive into the details involved in configuring and running gerrit on a machine, let me first say that I don't totally support the idea of migrating from the mailing list to gerrit. I volunteered to set things up because I felt the community (or at least its most active members) wanted to try it out. I don't necessarily agree with the choice.

Ah, and I'm writing this post mostly because I want to be able to close the 300+ tabs I had to open on my Firefox during these last weeks, when I was searching how to solve the myriad of problems I faced during the set up!

The initial plan

My very initial plan after I left the session room was to talk to the sourceware.org folks and ask them if it would be possible to host our gerrit there. Surprisingly, they already have a gerrit instance up and running. It was set up back in 2016, runs an old version of gerrit, and is pretty much abandoned. Actually, saying that it has been configured is an overstatement: it doesn't support authentication, user registration, barely supports projects, etc. It's basically what you get from a pristine installation of the gerrit RPM package in RHEL 6.

I won't go into details here, but after some discussion it was clear to me that the instance on sourceware would not be able to meet our needs (or at least what I had in mind for us), and that it would be really hard to bring it to the quality level I wanted. I decided to go look for other options.

The OSCI folks

Have I mentioned the OSCI project before? They are absolutely awesome. I really love working with them, because so far they've been able to meet every request I made! So, kudos to them! They're the folks that host our GDB Buildbot master. Their infrastructure is quite reliable (I never had a single problem), and Marc Dequénes (Duck) is very helpful, friendly and quick when replying to my questions :-).

So, it shouldn't come as a surprise that when I decided to look for another place to host gerrit, they were my first choice. And again, they delivered :-).

Now, it was time to start thinking about the gerrit set up.

User registration?

Over the course of these past 4 weeks, I had the opportunity to learn a bit more about how gerrit does things. One of the first things that negatively impressed me was the fact that gerrit doesn't handle user registration by itself. It is possible to have a very rudimentary user registration "system", but it relies on the site administrator manually registering the users (via htpasswd) and managing everything by him/herself.

It was quite obvious to me that we would need some kind of access control (we're talking about a GNU project, with a copyright assignment requirement in place, after all), and the best way to implement it is by having registered users. And so my quest for the best user registration system began...

Gerrit supports some user authentication schemes, such as OpenID (not OpenID Connect!), OAuth2 (via plugin) and LDAP. I remembered hearing about FreeIPA a long time ago, and thought it made sense using it. Unfortunately, the project's community told me that installing FreeIPA on a Debian system is really hard, and since our VM is running Debian, it quickly became obvious that I should look somewhere else. I felt a bit sad at the beginning, because I thought FreeIPA would really be our silver bullet here, but then I noticed that it doesn't really offer a self-service user registration.

After exchanging a few emails with Marc, he told me about Keycloak. It's a full-fledged Identity Management and Access Management software that supports OAuth2 and LDAP, and provides a self-service user registration system, which is exactly what we needed! However, upon reading the description of the project, I noticed that it is written in Java (JBoss, to be more specific), and I was afraid that it was going to be very demanding on our system (after all, gerrit is also a Java program). So I decided to put it on hold and take a look at using LDAP...

Oh, man. Where do I start? Actually, I think it's enough to say that I just tried installing OpenLDAP, but gave up because it was too cumbersome to configure. Have you ever heard that LDAP is really complicated? I'm afraid this is true. I just didn't feel like wasting a lot of time trying to understand how it works, only to have to solve the "user registration" problem later (because of course, OpenLDAP is just an LDAP server).

OK, so what now? Back to Keycloak it is. I decided that instead of thinking that it was too big, I should actually install it and check it for real. Best decision, by the way!

Setting up Keycloak

It's pretty easy to set Keycloak up. The official website provides a .tar.gz file which contains the whole directory tree for the project, along with helper scripts, .jar files, configuration, etc. From there, you just need to follow the documentation, edit the configuration, and voilà.

For our specific setup I chose to use PostgreSQL instead of the built-in database. This is a bit more complicated to configure, because you need to download the JDBC driver, and install it in a strange way (at least for me, who is used to just editing a configuration file). I won't go into details on how to do this here, because it's easy to find on the internet. Bear in mind, though, that the official documentation is really incomplete when covering this topic! This is one of the guides I used, along with this other one (which covers MariaDB, but can be adapted to PostgreSQL as well).

Another interesting thing to notice is that Keycloak expects to be running on its own virtual domain, and not under a subdirectory (e.g., https://example.org instead of https://example.org/keycloak). For that reason, I chose to run our instance on another port. It is supposedly possible to configure Keycloak to run under a subdirectory, but it involves editing a lot of files, and I confess I couldn't make it fully work.

A last thing worth mentioning: the official documentation says that Keycloak needs Java 8 to run, but I've been using OpenJDK 11 without problems so far.

Setting up Gerrit

The fun begins now!

The gerrit project also offers a .war file ready to be deployed. After you download it, you can execute it and initialize a gerrit project (or application, as it's called). Gerrit will create a directory full of interesting stuff; the most important for us is the etc/ subdirectory, which contains all of the configuration files for the application.

After initializing everything, you can try starting gerrit to see if it works. This is where I had my first trouble. Gerrit also requires Java 8, but unlike Keycloak, it doesn't work out of the box with OpenJDK 11. I had to make a small but important addition in the file etc/gerrit.config:

[container]
    ...
    javaOptions = "--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED"
    ...

After that, I was able to start gerrit. And then I started trying to set it up for OAuth2 authentication using Keycloak. This took a very long time, unfortunately. I was having several problems with Gerrit, and I wasn't sure how to solve them. I tried asking for help on the official mailing list, and was able to make some progress, but in the end I figured out what was missing: I had forgotten to add AllowEncodedSlashes On in the Apache configuration file! This was causing a very strange error in Gerrit (a java.lang.StringIndexOutOfBoundsException!), which didn't make sense. In the end, my Apache config file looks like this:

<VirtualHost *:80>
    ServerName gnutoolchain-gerrit.osci.io

    RedirectPermanent / https://gnutoolchain-gerrit.osci.io/r/
</VirtualHost>

<VirtualHost *:443>
    ServerName gnutoolchain-gerrit.osci.io

    RedirectPermanent / /r/

    SSLEngine On
    SSLCertificateFile /path/to/cert.pem
    SSLCertificateKeyFile /path/to/privkey.pem
    SSLCertificateChainFile /path/to/chain.pem

    # Good practices for SSL
    # taken from: <https://mozilla.github.io/server-side-tls/ssl-config-generator/>

    # intermediate configuration, tweak to your needs
    SSLProtocol             all -SSLv3
    SSLCipherSuite          ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
    SSLHonorCipherOrder     on
    SSLCompression          off
    SSLSessionTickets       off

    # OCSP Stapling, only in httpd 2.3.3 and later
    #SSLUseStapling          on
    #SSLStaplingResponderTimeout 5
    #SSLStaplingReturnResponderErrors off
    #SSLStaplingCache        shmcb:/var/run/ocsp(128000)

    # HSTS (mod_headers is required) (15768000 seconds = 6 months)
    Header always set Strict-Transport-Security "max-age=15768000"

    ProxyRequests Off
    ProxyVia Off
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>

    AllowEncodedSlashes On
        ProxyPass /r/ http://127.0.0.1:8081/ nocanon
        #ProxyPassReverse /r/ http://127.0.0.1:8081/r/
</VirtualHost>

I confess I was almost giving up Keycloak when I finally found the problem...

Anyway, after that things went more smoothly. I was finally able to make the user authentication work, then I made sure Keycloak's user registration feature also worked OK...

Ah, one interesting thing: the user logout wasn't really working as expected. The user was able to logout from gerrit, but not from Keycloak, so when the user clicked on "Sign in", Keycloak would tell gerrit that the user was already logged in, and gerrit would automatically log the user in again! I was able to solve this by redirecting the user to Keycloak's logout page, like this:

[auth]
    ...
    logoutUrl = https://keycloak-url:port/auth/realms/REALM/protocol/openid-connect/logout?redirect_uri=https://gerrit-url/
    ...

After that, it was already possible to start worrying about configuring gerrit itself. I don't know if I'll write a post about that, but let me know if you want me to.

Conclusion

If you ask me if I'm totally comfortable with the way things are set up now, I can't say that I am 100%. I mean, the setup seems robust enough that it won't cause problems in the long run, but what bothers me is the fact that I'm using technologies that are alien to me. I'm used to setting up things written in Python, C, and C++, with very simple yet powerful configuration mechanisms, where it's easy to discover what's wrong when something bad happens.

I am reasonably satisfied with the way Keycloak logs things, but Gerrit leaves a lot to be desired in that area. And both projects are written in languages/frameworks that I am absolutely not comfortable with. Like, it's really tough to debug something when you don't even know where the code is or how to modify it!

All in all, I'm happy that this whole adventure has come to an end, and now all that's left is to maintain it. I hope that the GDB community can make good use of this new service, and I hope that we can see a positive impact in the quality of the whole patch review process.

My final take is that this is all worth it as long as Free Software and User Freedom are the ones who benefit.

P.S.: Before I forget, our gerrit instance is running at https://gnutoolchain-gerrit.osci.io.

Chris Lamb: Tour d'Orwell: The River Orwell

15 October, 2019 - 07:19

Continuing my Orwell-themed peregrination, a certain Eric Blair took his pen name "George Orwell" because of his love for a certain river just south of Ipswich, Suffolk. With sheepdog trials being undertaken in the field underneath, even the concrete Orwell Bridge looked pretty majestic from the garden centre-cum-food hall.

Martin Pitt: Hardening Cockpit with systemd (socket activation)³

15 October, 2019 - 07:00
Background

A major future goal for Cockpit is support for client-side TLS authentication, primarily with smart cards. I created a Proof of Concept and a demo long ago, but before this can be called production-ready, we first need to harden Cockpit’s web server cockpit-ws to be much more tamper-proof than it is today. This heavily uses systemd’s socket activation. I believe we are now using this in quite a unique and interesting way that helped us achieve our goal rather elegantly and robustly.
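
For readers unfamiliar with the mechanism, here is a rough sketch of what socket activation looks like (the unit names and paths are hypothetical, not Cockpit’s actual files): systemd owns the listening socket and only spawns the service when the first connection arrives.

# example.socket -- systemd listens on the port on the service's behalf
[Unit]
Description=Example socket-activated web service

[Socket]
ListenStream=9090

[Install]
WantedBy=sockets.target

# example.service -- started on demand, inheriting the socket from systemd
[Unit]
Description=Example web service

[Service]
ExecStart=/usr/local/bin/example-server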

Arturo Borrero González: What to expect in Debian 11 Bullseye for nftables/iptables

15 October, 2019 - 00:00

Debian 11, codename Bullseye, is already in the works. It is interesting to make decisions early in the development cycle to give people time to accommodate and integrate accordingly, and this post brings you the latest update on the plans for Netfilter software in Debian 11 Bullseye. Mind that Bullseye is expected to be released somewhere in 2021, so there is still plenty of time ahead.

The situation at the release of Debian 10 Buster was that iptables used the -nft backend by default, and one had to explicitly select -legacy in the alternatives system in case of any problem. That was intended to help people migrate from iptables to nftables. Now the question is what to do next.

Back in July 2019, I started an email thread in the debian-devel@lists.debian.org mailing lists looking for consensus on lowering the archive priority of the iptables package in Debian 11 Bullseye. My proposal is to drop iptables from Priority: important and promote nftables instead.

In general, having such a priority level means the package is installed by default in every single Debian installation. Given that we aim to deprecate iptables, and that starting with Debian 10 Buster iptables is not even using the x_tables kernel subsystem but nf_tables, having such a priority level seems pointless and inconsistent. There was agreement, and I have already made the changes to both packages.

This is another step in deprecating iptables and welcoming nftables. But it does not mean that iptables won’t be available in Debian 11 Bullseye. If you need it, you can run aptitude install iptables to download and install it from the package repository.

The second part of my proposal was to promote firewalld as the default ‘wrapper’ for firewalling in Debian. I think this is in line with the direction other distros are moving. It turns out firewalld integrates pretty well with the system, includes a DBus interface, and many system daemons (like libvirt) already have native integration with firewalld. Also, I believe the days of creating custom-made scripts and hacks to handle the local firewall may be long gone, and firewalld should be very helpful here too.
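
To give an idea of the workflow, typical firewalld usage looks like this (a sketch; which services you allow depends on your setup):

firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --reload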

Ritesh Raj Sarraf: Bpfcc New Release

14 October, 2019 - 16:24
BPF Compiler Collection 0.11.0

bpfcc version 0.11.0 has been uploaded to Debian Unstable and should be accessible in the repositories by now. After the 0.8.0 release, this has been the next one uploaded to Debian.

Multiple source repositories

This release brought in a dependency on another set of sources from the libbpf project. In the upstream repo, it is still a topic of discussion how to release tools that depend on one another in unison. Right now, libbpf is configured as a git submodule in the bcc repository, so anyone using the upstream git repository should be able to build it.
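
For example, to get a buildable tree from upstream (repository URL as published by the iovisor project):

# Clone bcc together with its libbpf submodule in one go...
git clone --recurse-submodules https://github.com/iovisor/bcc.git

# ...or fetch the submodule afterwards in an existing checkout.
git submodule update --init --recursive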

Multiple source archives for a Debian package

I had read in the past about multiple source tarballs for a single package in Debian, but never tried it because I wasn’t maintaining anything in Debian that needed it. With bpfcc, this was a good opportunity to try it out. First, I came across this post from Raphaël Hertzog, which gives a good explanation of all that has been done. The article was very clear and concise on the topic.

Git Buildpackage

gbp is my tool of choice for packaging in Debian, so I took a quick look to check how gbp would take care of it. Everything was in place, and it Just Worked:

rrs@priyasi:~/rrs-home/Community/Packaging/bpfcc (master)$ gbp buildpackage --git-component=libbpf
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig.tar.gz
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig-libbpf.tar.gz
gbp:info: Performing the build
dpkg-checkbuilddeps: error: Unmet build dependencies: arping clang-format cmake iperf libclang-dev libedit-dev libelf-dev libzip-dev llvm-dev libluajit-5.1-dev luajit python3-pyroute2
W: Unmet build-dependency in source
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
dh clean --buildsystem=cmake --with python3 --no-parallel
   dh_auto_clean -O--buildsystem=cmake -O--no-parallel
   dh_autoreconf_clean -O--buildsystem=cmake -O--no-parallel
   dh_clean -O--buildsystem=cmake -O--no-parallel
dpkg-source: info: using source format '3.0 (quilt)'
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: warning: ignoring deletion of directory src/cc/libbpf
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.dsc
I: Generating source changes file for original dsc
dpkg-genchanges: info: including full source code in upload
dpkg-source: info: unapplying fix-install-path.patch
ERROR: ld.so: object 'libeatmydata.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
W: cgroups are not available on the host, not using them.
I: pbuilder: network access will be disabled during build
I: Current time: Sun Oct 13 19:53:57 IST 2019
I: pbuilder-time-stamp: 1570976637
I: Building the build Environment
I: extracting base tarball [/var/cache/pbuilder/sid-amd64-base.tgz]
I: copying local configuration
I: mounting /proc filesystem
I: mounting /sys filesystem
I: creating /{dev,run}/shm
I: mounting /dev/pts filesystem
I: redirecting /dev/ptmx to /dev/pts/ptmx
I: Mounting /var/cache/apt/archives/
I: policy-rc.d already exists
W: Could not create compatibility symlink because /tmp/buildd exists and it is not a directory
I: using eatmydata during job
I: Using pkgname logfile
I: Current time: Sun Oct 13 19:54:04 IST 2019
I: pbuilder-time-stamp: 1570976644
I: Setting up ccache
I: Copying source file
I: copying [../bpfcc_0.11.0-1.dsc]
I: copying [../bpfcc_0.11.0.orig-libbpf.tar.gz]
I: copying [../bpfcc_0.11.0.orig.tar.gz]
I: copying [../bpfcc_0.11.0-1.debian.tar.xz]
I: Extracting source
dpkg-source: warning: extracting unsigned source package (bpfcc_0.11.0-1.dsc)
dpkg-source: info: extracting bpfcc in bpfcc-0.11.0
dpkg-source: info: unpacking bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
I: Not using root during the build.
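
To avoid passing --git-component on every invocation, the component can also be recorded in the package's gbp.conf; a sketch, assuming gbp's usual mapping of --git-* command-line options to configuration keys:

# debian/gbp.conf
[buildpackage]
# List of additional tarball components; matches the
# bpfcc_0.11.0.orig-libbpf.tar.gz naming seen in the log above.
component = ['libbpf']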

Debian XMPP Team: New Dino in Debian

14 October, 2019 - 05:00

Dino (dino-im in Debian), the modern and beautiful chat client for the desktop, has some nice, new features. Users of Debian testing (bullseye) might like to try them:

  • XEP-0391: Jingle Encrypted Transports (explained here)
  • XEP-0402: Bookmarks 2 (explained here)

Note that users of Dino on Debian 10 (buster) should upgrade to version 0.0.git20181129-1+deb10u1, because of a number of security issues that have been found (CVE-2019-16235, CVE-2019-16236, CVE-2019-16237).
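
To check whether the fixed package is installed (or available), something like this should do:

# Compare the installed dino-im version with what the archive offers;
# the fixed package carries the +deb10u1 suffix mentioned above.
apt policy dino-im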

There have been other XMPP-related updates in Debian since the release of buster.

You might be interested in the October XMPP newsletter, which is also available in German.

Utkarsh Gupta: Joining Debian LTS!

14 October, 2019 - 02:41

Hey,

(DPL Style):
TL;DR: I joined Debian LTS as a trainee in July (during DebConf) and finally as a paid contributor from this month onward! :D

Here’s something interesting that happened last weekend!
Back during the good days of DebConf19, I finally got a chance to meet Holger! As amazing and inspiring a person as he is, it was an absolute pleasure meeting him, and I also got a chance to talk about Debian LTS in more detail.

I was introduced to Debian LTS by Abhijith during his talk at MiniDebConf Delhi, and since then I’ve been kinda interested in the project!
But it was finally here that things got a little “official”, and after a couple of mail exchanges with Holger and Raphael, I joined in as a trainee!

I had almost no idea what to do next, so for the next month I stayed silent, observing the workflow as people kept committing and announcing updates.
Finally, in September, I started triaging and fixing CVEs for Jessie and Stretch (mostly the former).

Thanks to Abhijith, who explained the basics of what a DLA is and how we go about fixing bugs and then announcing them.

With that, I was able to fix a couple of CVEs, and thanks to Holger (again) for reviewing and sponsoring the uploads! :D

I mostly worked (as a trainee) on:

  • CVE-2019-10751, affecting httpie, and
  • CVE-2019-16680, affecting file-roller.

And finally this happened:
(Though a little hiccup happened there, that’s something we can ignore!)

So finally, I’ll be working with the team from this month on!
As Holger says, very much yay! :D

Until next time.
:wq for today.

Iustin Pop: Actually fixing a bug

14 October, 2019 - 01:43

One of the outcomes of my recent (last few years) sports ramp-up is that my opensource work has been almost entirely left aside. Having an office job makes it very hard to spend more time sitting at the computer at home too…

So even though my Travis dashboard had been red for a while now, I didn’t look into it until today. Since I didn’t change anything recently and the Travis builds just started to fail on their own, I was sure it was just environment changes that needed to be taken into account.

And indeed it was so, for two out of three projects. The third one… I actually got to fix a bug, introduced at the beginning of the year, which gcc (the same gcc that originally passed it) started to trip on a while back. I even had to read the man page of snprintf! It was fun ☺, too bad I don’t have enough time to do this more often…

My Travis dashboard is green again, and the “test suite” (if you can call it that) has been expanded to explicitly catch this specific problem in the future.

Shirish Agarwal: Social media, knowledge and some history of Banking

13 October, 2019 - 05:58

First of all, Happy Dussehra to everybody. Dussehra in India is a symbol of many things, among them forgiveness and new beginnings. While I don’t know about new beginnings, I do feel there is still a lot of baggage which needs to be let go of. I will try to share some insights I uncovered over the last few months and a few realizations I came across.

First of all, thank you to the Debian GNOME team for continuing to work on new versions of packages. While there are still a bunch of bugs which need to be fixed, especially #895990 and #913978 among others, kudos for working on it. Hopefully those bugs and others will be fixed soon, so we can install GNOME without a hiccup. I have not been on IRC because my riot-web has been broken for several days now. Also, most of the IRC and Telegram channels related to Debian become echo chambers one way or the other, as you do not get any serious opposition. Twitter, while highly toxic, also gives you the urge to fight the good fight when people fight, whether out of principle or for some other reason (usually paid trolls). While I follow my own rules on Twitter apart from their TOS, here are a few rules that people new to social media, in India or perhaps elsewhere, could use:

  1. It is difficult to remain neutral and stick to the facts. Even if you just stick to the facts, you will be branded as an “urban naxal” or some such name.
  2. I find that many times, if you stay calm and don’t react, the other person turns out to be curious and displays an ignorance of things you thought everyone knew. Whether that is due to lack of education, lack of knowledge or pretension, I can’t say; although if it is pretension, they are caught out sooner or later.
  3. Be civil at all times. If somebody harasses you or calls you names, report them and block them, although Twitter still needs to improve its reporting process a whole lot. When even somebody like me (with a bit of understanding of law, technology, language etc.) had a hard time figuring out Twitter’s reporting flow, I don’t know how many people would be able to use it successfully. Maybe they make it so unhelpful so that the traffic flows no matter what. I do realize they still haven’t figured out their business model, but that’s a question for another day. In short, they need to make it far simpler than it is today.
  4. You always have the option to block people, but it has its own consequences.
  5. Be passive-aggressive if the situation demands it.
  6. Most importantly though, if somebody starts making jokes about you or starts abusing you, it is a sure sign that the person on the other side doesn’t have any more arguments, and you have won.
Banking

Before I start, let me share why I am writing a blog post on this topic. The reason is pretty simple: it seems a huge number of Indians don’t know the history of how banking started and the various turns it took. In fact, nowadays history is being hotly contested and perhaps even re-written, hence for some things I will be sharing sources, though even within them there is a possibility of contestation. One long-standing point of contention is when ancient coinage, and the techniques of smelting and flattening, came to India; depending on whom you ask, you get different answers. A lot of people are waiting for more insight from the Keezhadi excavation, which may shed some light on this topic as well. There are rumors that the funding is being stopped, but I hope that isn’t true and that we gain some more insight into Indian history. In fact, in South India there seems to be a lot of curiosity and attraction towards the site, and it is possible that the next time I get a chance to see South India, I may try to visit this unique location if a museum gets built somewhere nearby.

Sorry for deviating from the topic, but it seems that ancient coinage started anywhere between the 1st millennium BCE and the 6th century BCE, so it could be anywhere between 1,500 and 2,000 years old in India. While we can’t say anything for sure, it’s possible that there was barter before that, and there is also some history of sharing tokens in different parts of the world. The various timelines get all jumbled up, hence I would suggest people use the Wikipedia page on the History of Money as a starting point; while it may not give a complete picture, it will probably broaden one’s understanding a little. One of the reasons history is so hotly contested could perhaps also lie in the destruction of the Ancient Library of Alexandria. Who knows what more we would have known of our ancients if it had not been destroyed.

Hundi (16th Century)

I am jumping to the 16th century as it is closer to today’s modern banking; otherwise the blog post would be too long. The Hundi was a financial instrument in use from the 16th century onwards, serving as either a forerunner of the cheque or of the traveller’s cheque. There doesn’t seem to be much information on whether it was introduced by the British, or earlier by the Portuguese when they came to India, or whether it was prevalent before that. There is, however, a fascinating in-depth study of the Hundi between 1858 and 1978, done by Marina Bernadette for the London School of Economics as her dissertation paper.

Banias and Sarafs

As I had shared before, history in India is intertwined with mythology. While it is possible that a lot of the history behind this is documented somewhere, I haven’t been able to find it. As I come from a Bania family, I had learnt a lot of stories, both about the migratory strain that Banias had and about how Banias used to split their children across adjoining states. Before the British ruled over India, popular history tells us that it was the Mughal empire that ruled over us. What it doesn’t tell us, though, is that during both the Mughal empire and the British era, the Banias and the Sarafs, who were moneylenders and bullion traders respectively, hedged their bets, more so if they were in royal service or bound to be close to the power of administration of the state or mini-kingdoms. What they used to do is make sure that one of the sons would serve the king here, while another son might serve the Muslim ruler. The idea behind this was that irrespective of whoever won, the Banias or Sarafs would be able to continue their traditions; it was very much possible that the other brother would not be killed, and even if he was, any or all wealth would pass to the victorious brother and the family name would live on. If I were to look into it, I’m sure I’d find the same not only among Banias and Sarafs but perhaps in other castes and communities as well. Modern history also tells of the Rothschilds, who were and continue to be an influence on the world today.

As to why I shared how people acted in their self-interest: nowadays in Indian social media, many people choose to believe a very simplistic black-and-white narrative, and they are being fed that by today’s dominant political party in power. What I’m simply trying to say is that history is much more complex than that. While you may choose to believe either of the beliefs, it might open a window in at least some Indians’ minds that there is a possibility of another way things were done, and of other ways in which people acted, than what is perceived today. It is also possible this may be contested today, as a lot of people would like to appear on the ‘right side’ of history as it seems today.

Banking in British Raj till nationalization

When the British came, they brought the modern banking system with them. This led to the creation of various banks like the Bank of Bengal, the Bank of Bombay and the Bank of Madras, which were later subsumed under the Imperial Bank of India, which in turn became the State Bank of India in 1955. While I will not go into details, I do have curiosity, so if life allows, I would surely want to visit either the Banca Monte dei Paschi di Siena S.p.A of Italy or the Berenberg Bank, both of which probably have a lot more history than what is written on their Wikipedia pages. Soon there was a whole clutch of banks which would continue to facilitate the British till independence, and leak money overseas even afterwards, till the banks were nationalized in 1956 on the recommendation of the ‘Gorwala Committee’. Apart from the opaqueness of private banking and the leakages, there was the non-provision of loans to the priority sector, i.e. farming in India; A.D. Gorwala recommended nationalization to weed out both issues in a single solution. One could debate the efficacy of the same, but history has shown us that privatization in the financial sector has many a time been costly to depositors. The financial crisis of 2008, and its aftermath in many of the financial markets, more so in private banks, is a testament to it. Even the documentary Plenary’s Men gives a whole lot of insight into the corruption that private banks engage in today.

The Plenary’s Men on YouTube is, at least to my mind, evidence enough that India should be cautious in its dealings with private banks.

Co-operative banks and their Rise

The rise of co-operative banks in India was largely due to the rise of co-operative societies, the Co-operative Societies Act having been passed back in 1904. While there were quite a few co-operative societies and banks, arguably the real fillip to co-operative banking was given by Amul when it started in 1946, together with the milk credit society that started with it. I don’t know how many people saw ‘Manthan’, which chronicled the story and brought both the co-operative societies and the co-operative banks to millions of Indians. It is a classic movie which a lot of today’s youth probably don’t know, and even if they did, would take time to identify with; although people of my generation and earlier generations do get it. One of the things many people don’t get is that for a lot of people even today, especially marginal farmers and the like in rural areas, co-operative banks are still the only solution. While in recent times the government of the day has tried something called the Jan Dhan Yojana, it hasn’t been as much of a success as they were hoping for. While reams of paper have been written about it, like most policies it didn’t deliver to the last person, which is what such inclusion programs aim for. The issues, from design to implementation, are many, but perhaps that’s for some other time. I am sharing about co-operative banks because a recent scam took place in one of them, probably one of the most respected and widely held co-operative banks. I would rather share Sucheta Dalal’s excellent analysis of the PMC bank crisis, which is unfolding and will perhaps continue to unfold in the days to come.

Conclusion

At the end, I have to admit I took a lot of short-cuts to get here. There may be details people might want me to incorporate; if so, please let me know and I will try to add them. I did try to compress as much as possible while being as exhaustive as possible. I also haven’t used any data, as I wanted to keep the explanations as simple as possible and to have as little politics in them as possible, even though the biases which are there, are there.

Dirk Eddelbuettel: GitHub Streak: Round Six

12 October, 2019 - 22:53

Five years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking

github activity october 2013 to october 2014

And four years ago a first follow-up appeared in this post:

github activity october 2014 to october 2015

And three years ago we had a follow-up

github activity october 2015 to october 2016

And two years ago we had another one

github activity october 2016 to october 2017

And last year another one

github activity october 2017 to october 2018

As today is October 12, here is the newest one from 2018 to 2019:

github activity october 2018 to october 2019

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Louis-Philippe Véronneau: Alpine MusicSafe Classic Hearing Protection Review

12 October, 2019 - 11:00

Yesterday, I went to a punk rock show and had tons of fun. One of the bands playing (Jeunesse Apatride) hadn't played in 5 years and the crowd was wild. The other bands playing were also great. Here are a few links if you enjoy Oi! and Ska:

Sadly, those kinds of concerts are always waaaaayyyyy too loud. I mostly go to small venue concerts, and for some reason the sound technicians think it's a good idea to make everyone's ears bleed. You really don't need to amplify the drums when the whole concert venue is 50m²...

So I bought hearing protection. It was the first time I wore earplugs at a concert and it was great! I can't really compare the model I got (Alpine MusicSafe Classic earplugs) to other brands since it's the only one I've tried out, but:

  • They were very comfortable. I wore them for about 5 hours and didn't feel any discomfort.

  • They came with two sets of plastic tips you insert into the silicone earbuds. I tried the -17 dB ones, but decided to go with the -18 dB inserts as it was still freaking loud.

  • They fitted very well in my ears even though I was in the roughest mosh pit I've ever experienced (and I've seen quite a few). I was sweating profusely from all the heavy moshing, and never once did I fear losing them.

  • My ears weren't ringing when I came back home so I guess they work.

  • The earplugs didn't distort sound, only reduce the volume.

  • They came with a handy aluminium carrying case that's really durable. You can put it on your keychain and carry them around safely.

  • They only cost me ~25 CAD with taxes.

The only thing I disliked was that I found it pretty much impossible to sing while wearing them, as I couldn't really hear myself. With a bit of practice I was able to sing in tune, but it wasn't great :(

All in all, I'm really happy with my purchase and I don't think I'll ever go to another concert without earplugs.

Molly de Blanc: Conferences

12 October, 2019 - 06:23

I think there are too many conferences.

Are there too many FLOSS conferences?

— Molly dBoo (@mmillions) October 7, 2019

I conducted this very scientific Twitter poll and, out of 52 respondents, only 23% agreed with me. Some people who disagreed with me pointed out specifically what they think is lacking: more regional events, more events in specific countries, and more “generic” FLOSS events.

Many projects have a conference, and then there are “generic” conferences, like FOSDEM, LibrePlanet, LinuxConfAU, and FOSSAsia. Some are more corporate (OSCON), while others are more community focused (e.g. SeaGL).

There are just a lot of conferences.

I average a conference a month, with most of them being more general sorts of events, and a few being project specific, like DebConf and GUADEC.

So far in 2019, I went to: FOSDEM, CopyLeft Conf, LibrePlanet, FOSS North, Linux Fest Northwest, OSCON, FrOSCon, GUADEC, and GitLab Commit. I’m going to All Things Open next week. In November I have COSCon scheduled. I’m skipping SeaGL this year. I am not planning on attending 36C3 unless my talk is accepted. I canceled my trip to DebConf19. I did not go to Camp this year. I also had a board meeting in NY, an upcoming one in Berlin, and a Debian meeting in the other Cambridge. I’m skipping LAS and likely going to SFSCon for GNOME.

So that's 9 so far this year, and somewhere between 1 and 4 more, depending on some details.

There are also conferences that don’t happen every year, like HOPE and CubaConf. There are some that I haven’t been to yet, like PyCon, and more regional events like Ohio Linux Fest, SCALE, and FOSSCon in Philadelphia.

I think I travel too much, and plenty of people travel more than I do. This is one of the reasons why we have too many events: the same people are traveling so much.

When you’re nose deep in it, when you think that what you’re doing is important, you keep going to them as long as you’re invited. I really believe in the messages I share during my talks, and I know that by speaking I am reaching audiences I wouldn’t otherwise. As long as I keep getting invited places, I’ll probably keep going.

Finding sponsors is hard(er).

It is becoming increasingly difficult to find sponsors for conferences. This is my experience, and also what I’ve heard from speaking with others about it: lower response rates to requests, and people choosing lower sponsorship levels than they have in past years.

CFP responses are not increasing.

I’m yet to hear of any established community-run tech conferences who’ve had growth in their CFP response rate this year.

Peak conference?

— Christopher Neugebauer (@chrisjrn) October 3, 2019

I sort of think the Tweet says it all. Not every conference is having this experience, but the ones I’ve been involved with, or have spoken to the organizers of, are needing to extend their deadlines and are generally seeing lower response rates.

Do I think we need fewer conferences?

Yes and no. I think smaller, regional conferences are really important for reaching communities and individuals who don’t have the resources to travel. They also give new speakers opportunities to share what they have to say, which is important for the growth and robustness of FOSS.

Project-specific conferences are useful for those projects. They give us time to have meetings and sprints, to work and plan, and to learn specifically about our project and feel more connected to our collaborators.

On the other hand, I do think we have more conferences than even a global movement can actively support in terms of speakers, organizer energy, and sponsorship dollars.

What do I think we can do?

Not all of these are great ideas, and not all of them would work for every event. However, I think some combination of them might make a difference for the ecosystem of conferences.

More single-track or two-track conferences. All Things Open has 27 sessions occurring concurrently. Twenty-seven! It’s a huge event that caters to many people, but seriously, that’s too much going on at once. More 3-4 track conferences should consider dropping to 1-2 tracks, and conferences with more should consider dropping their track counts as well. This means fewer speakers at a time.

Stop trying to grow your conference. Growth feels like a sign of success, but it’s not. It’s a sign of getting more people to show up. It helps you make arguments to sponsors, because more attendees means more people being reached when a company sponsors.

Decrease sponsorship levels. I’ve seen conferences increasing their sponsorship levels. I think we should all agree to decrease those numbers. While we’ll all have to try harder to get more sponsors, companies will be able to sponsor more events.

Stop serving meals. I appreciate a free meal. It makes it easier to attend events, but meals are expensive and difficult to organize logistically. I know meals make it easier for some people, especially students, to attend. Consider offering special RSVP lunches for students, recent grads, and people looking for work.

Ditch the fancy parties. Okay, I also love a good conference party. They’re loads of fun, but they can be quite expensive. They also encourage drinking, which I think is bad for the culture.

Ditch the speaker dinners. Okay, I also love a good speaker dinner. It’s fun to relax, see my friends, and have a nice meal that isn’t too loud or overwhelming. But these are really expensive. I’ve been trying to donate an amount equal to the cost of dinner per person to local food banks/food insecurity charities, but people are rarely willing to share that information! Paying for a nice dinner out of pocket, with multiple bottles of wine, usually runs $50-80 with tip. I know one dinner I went to was $150 a person. I think the community would be better served if we spent that money on travel grants. If you want to be nice to speakers, I enjoy a box of chocolates I can take home and share with my roommates.

Give preference to local speakers. One of the things conferences do is bring in speakers from around the world to share their ideas with your community, or with an equally global community. This is cool, but by giving preference to local speakers, you’re building expertise in your own geography.

Consider combining your community conferences. Rather than having many conferences for smaller communities, consider co-locating conferences and sharing resources (and attendees). This requires even more coordination to organize, but could work out well.

Volunteer for your favorite non-profit or project. A lot of us have booths at conferences, and send people around the world in order to let you know about the work we’re doing. Consider volunteering to staff a booth, so that your favorite non-profits and projects have to send fewer people.

While most of those things are not “have fewer conferences,” I think they would help solve the problems conference saturation is causing: it’s expensive for sponsors, it’s expensive for speakers, it creates a large carbon footprint, and it increases burnout among organizers and speakers.

I must enjoy traveling, because I do it so much. I enjoy talking about FOSS, user rights, and digital rights. I like meeting people, sharing with them, and learning from them. I think what I have to say is important. At the same time, I think I am part of an unhealthy culture in FOSS, one that encourages burnout, excessive travel, and unnecessary spending of money that could be used for better things.

One last thing you can do, to help me, is submit talks to your local conference(s). This will help with some of these problems as well, can be a great experience, and is good for your conference and your community!

Markus Koschany: My Free Software Activities in September 2019

11 October, 2019 - 03:49

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • Reiner Herrmann investigated a build failure of supertuxkart on several architectures and prepared an update to link against libatomic. I reviewed and sponsored the new revision which allowed supertuxkart 1.0 to migrate to testing.
  • Python 3 ports: Reiner also ported bouncy, a game for small kids, to Python3 which I reviewed and uploaded to unstable.
  • I upgraded atomix to version 3.34.0 as requested, although it is unlikely that you will find a major difference from the previous version.
Debian Java Misc
  • I packaged new upstream releases of ublock-origin and privacybadger, two popular Firefox/Chromium addons and
  • packaged a new upstream release of wabt, the WebAssembly Binary Toolkit.
Debian LTS

This was my 43rd month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 11.09.2019 until 15.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in libonig, bird, curl, openssl, wpa, httpie, asterisk, wireshark and libsixel.
  • DLA-1922-1. Issued a security update for wpa fixing 1 CVE.
  • DLA-1932-1. Issued a security update for openssl fixing 2 CVE.
  • DLA-1900-2. Issued a regression update for apache fixing 1 CVE.
  • DLA-1943-1. Issued a security update for jackson-databind fixing 4 CVE.
  • DLA-1954-1. Issued a security update for lucene-solr fixing 1 CVE. I triaged CVE-2019-12401 and marked Jessie as not-affected because we use the system libraries of woodstox in Debian.
  • DLA-1955-1. Issued a security update for tcpdump fixing 24 CVE by backporting the latest upstream release to Jessie. I discovered several test failures but after more investigation I came to the conclusion that the test cases were simply created with a newer version of libpcap which causes the test failures with Jessie’s older version. DLA-1955-1 will be available shortly.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my sixteenth month, and I have been assigned to work 15 hours on ELTS plus five hours left over from August. I used 15 of them for the following:

  • I was in charge of our ELTS frontdesk from 30.09.2019 until 06.10.2019 and I triaged CVEs in tcpdump. There were no reports of other security vulnerabilities for supported packages during that week.
  • ELA-163-1. Issued a security update for curl fixing 1 CVE.
  • ELA-171-1. Issued a security update for openssl fixing 2 CVE.
  • ELA-172-1. Issued a security update for linux fixing 23 CVE.
  • ELA-174-1. Issued a security update for tcpdump fixing 24 CVE. ELA-174-1 will be available shortly.
