Planet Debian

Planet Debian - https://planet.debian.org/

Ben Hutchings: Debian LTS work, March 2018

12 April, 2018 - 03:41

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from February. I worked 15 hours and will again carry over 2 hours to April.

I made another two releases on the Linux 3.2 longterm stable branch (3.2.100 and 3.2.101), the latter including mitigations for Spectre on x86. I rebased the Debian package onto 3.2.101 but didn't upload an update to Debian this month. We will need to add gcc-4.9 to wheezy before we can enable all the mitigations for Spectre variant 2.

Joerg Jaspert: Debian SecureBoot Sprint 2018

11 April, 2018 - 22:01

Monday morning I gave back the keys to Office Factory Fulda, who sponsored the location for the SecureBoot Sprint from Thursday, 4th April to Sunday, 8th April. Apparently we left a pretty positive impression (we managed to clean up), so we are welcome again for future sprints.

The goal of this sprint was enabling SecureBoot in/for Debian, so that users who have SecureBoot-enabled machines do not need to turn that off to be able to run Debian. That requires us to sign a certain set of packages in a defined way, handling it as automatically as possible while ensuring that it is done in a safe/secure way.

Now add details like secure handling of keys, only signing pre-approved sets (to make abuse harder), revocations, key rollovers; combine it all with the infrastructure and situation we have in Debian (say dak, buildd, the security archive with somewhat different rules of visibility, reproducibility, a huge set of architectures only some of which do SecureBoot, proper audit logging of signatures) and you end up with 7 people from different teams taking the whole first day just discussing and hashing out a specification. Plus some joining in virtually.

I’m not going into actual details of all that, as a sprint report will follow soon.

Friday to Sunday was used for the actual implementation of the agreed solution. The actual dak changes turned out not to be too large, and thankfully Ansgar was on them, so I could take time to push the FTPTeam's move to the new Salsa service forward. I still have a few of our less-important repositories to move, but that's a simple process I will be doing during this week; the most important step was coming up with a sane way of using Salsa.

That does not mean the actual web interface, but getting code changes from there to the various Debian hosts we run our services on. In the past, we pushed to the hosts directly, so any code change appearing on them meant that someone in the right unix group on that machine had made it appear.1 "Verified by ssh login", basically.

With Salsa, we now add a service that has a different set of administrators on top. And a big piece of software too, with a huge potential for bugs, worst case allowing random users access to our repositories. That is a much larger problem area than "git push via ssh" as in the past, and as such more likely to go bad. If we blindly pull from a repository on such shared space, the confirmation "an FTPMaster said this code is good" is gone.

So we need a way of adding that confirmation back, while still being able to use all the nice features that Salsa offers. Within Debian, what's better than using an already established way of trusting something: GnuPG-created signatures?!

So how to go forward? I was lucky: I did not need to invent this entirely on my own, as Enrico had similar concerns for the New Maintainer web pages. He set up CI to test his stuff and, if successful, install the tested code on the NM machine, provided that the commit is signed by a key from a defined set.

Unfortunately for me, he deals with a Django app that listens somewhere and can be pushed to. No such thing for me: I have neither Django nor a service listening that I can tell about changes to fetch.

We also have to take care when a database schema upgrade needs to be done: no automatic deployment on database-using FTPMaster hosts for that, a human needs to trigger it.

So the actual implementation that I developed for us, and which is in use on all hosts that we maintain code on, is implemented in our standard framework for regular jobs, cronscript.2

It turns out to live in multiple files (as usual with cronscript): the actual code is in deploy.functions and deploy.variables, and the order in which to call things is defined in deploy.tasks.

cronscript around it takes care of setting up the environment and keeping logs, and we now call the deploy every few minutes, securely getting our code deployed.
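The confirmation check at the heart of this can be sketched as follows. This is a minimal Python illustration of the idea, not the actual cronscript code; the fingerprints are made-up placeholders, and the input is the machine-readable status output GnuPG emits (for example via git verify-commit --raw):

```python
# Sketch: decide whether a fetched commit may be deployed, based on
# GnuPG's machine-readable status output (e.g. "git verify-commit --raw").
# The fingerprints below are placeholders, not real FTPMaster keys.

TRUSTED_FINGERPRINTS = {
    "0123456789ABCDEF0123456789ABCDEF01234567",
    "89ABCDEF0123456789ABCDEF0123456789ABCDEF",
}

def commit_is_deployable(gpg_status: str) -> bool:
    """True only if GnuPG reported a valid signature (VALIDSIG)
    made by one of the trusted fingerprints."""
    for line in gpg_status.splitlines():
        fields = line.split()
        # Status lines look like: [GNUPG:] VALIDSIG <fingerprint> <date> ...
        if len(fields) >= 3 and fields[:2] == ["[GNUPG:]", "VALIDSIG"]:
            if fields[2] in TRUSTED_FINGERPRINTS:
                return True
    return False
```

Only if a check like this passes does the job go on to check out that commit; anything unsigned, or signed by an unknown key, simply never gets deployed.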

  1. Or someone abused root rights, but if you do not trust root, you have lost anyway, and there is no reason to think that any DSA member would do this. 

  2. A framework for FTPMaster scripts that ensures the same basic setup everywhere and makes it easy to call functions and stuff, with or without error checking, in background or foreground. Also easy to restart in the middle of a script run after breakage, as it keeps track of where it was. 

Olivier Berger: Preventing resume immediately after suspend on Dell Latitude 5580 (Debian testing)

11 April, 2018 - 18:14

I’ve installed Debian buster (testing at the time of writing) on a new Dell Latitude 5580 laptop, and one annoyance I’ve found is that the laptop would almost always resume as soon as it was suspended.

AFAIU, it seems the culprit is the network card (Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM), which is configured with Wake-on-LAN (WoL) set to the "magic packet" mode (ethtool enp0s31f6 | grep Wake-on returns 'g'). Another hint is that grep enabled /proc/acpi/wakeup returns GLAN.

There are many ways to change that for the rest of the session with a command like ethtool -s enp0s31f6 wol d.

But I had a hard time figuring out, among the many hits in so many tutorials and forum posts, whether there was a preferred way to make this persistent.

My best hit so far is to add a file named /etc/systemd/network/50-eth0.link containing:

[Match]
 Driver=e1000e

[Link]
 WakeOnLan=off

The driver can be found by checking the udev settings as reported by udevadm info -a /sys/class/net/enp0s31f6.

There are other ways to do this with systemd, but so far this one seems to be working for me. HTH.

Steve Kemp: Bread and data

11 April, 2018 - 12:01

For the past two weeks I've mostly been baking bread. I'm not sure what made me decide to make some the first time, but it actually turned out pretty good, so I've been making some every day or two ever since.

This is the first time I've made bread in the past 20 years or so - I recall in the past I got frustrated that it never rose, or didn't turn out well. I can't see that I'm doing anything differently, so I'll just write it off as younger-Steve being daft!

No doubt I'll get bored of the delicious bread in the future, but for the moment I've got a good routine going - juggling going to the shops, child-care, and making bread.

Bread I've made includes the following:

Beyond that I've spent a little while writing a simple utility to embed resources in golang projects, after discovering the tool I'd previously been using, go-bindata, had been abandoned.

In short you feed it a directory of files and it will generate a file static.go with contents like this:

files[ "data/index.html" ] = "<html>....
files[ "data/robots.txt" ] = "User-Agent: * ..."

It's a bit more complex than that, but not much. As expected getting the embedded data at runtime is trivial, and it allows you to distribute a single binary even if you want/need some configuration files, templates, or media to run.
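To make the concept concrete, here is a hypothetical sketch (in Python, for brevity) of what such a generator does. This is not Steve's actual Go tool, just the same idea: walk a directory and emit a Go source file mapping relative paths to file contents:

```python
# Hypothetical sketch of a resource-embedding generator: walk a directory
# and emit Go source mapping relative file paths to their contents.
import os

def go_quote(data: bytes) -> str:
    # Escape every byte as \xNN so the literal is always valid Go.
    return '"' + "".join("\\x%02x" % b for b in data) + '"'

def generate_static_go(directory: str, package: str = "main") -> str:
    lines = ["package %s" % package, "", "var files = map[string]string{"]
    for root, _dirs, names in os.walk(directory):
        for name in sorted(names):
            path = os.path.join(root, name)
            with open(path, "rb") as fh:
                data = fh.read()
            rel = os.path.relpath(path, directory)
            lines.append('\t"%s": %s,' % (rel, go_quote(data)))
    lines.append("}")
    return "\n".join(lines)
```

The generated file is compiled into the binary like any other source file, which is what makes single-binary distribution possible.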

For example, in the project I discussed in my previous post there is an HTTP server which serves a user interface based upon Bootstrap. I want the HTML files which make up that user interface to be embedded in the binary, rather than distributed separately.

Anyway, it's not unique, it was fun to write, and I've switched to using it now:

Gunnar Wolf: DRM, DRM, oh how I hate DRM...

11 April, 2018 - 11:43

I love flexibility. I love when the rules of engagement are not set in stone and allow us to lead a full, happy, simple life. (Apologies to Felipe and Marianne for using their very nice sculpture for this rant. At least I am not desperately carrying a brick! ☺)

I have been very, very happy after I switched to a Thinkpad X230. This is the first computer I have with an option for a cellular modem, so after thinking it a bit, I got myself one:

After waiting for a couple of weeks, it arrived in an unexciting little envelope straight from Hong Kong. If you look closely, you can even appreciate there's a line (just below the smaller barcode) that reads "Lenovo". I soon found out how to open this laptop (kudos to Lenovo for a very sensible and easy opening process, great documentation... So far, it's the "openest" computer I have had!) and installed my new card!

The process was decently easy, and after patting myself on the back, I eagerly turned on my computer... only to find the BIOS halting with the following message:

1802: Unauthorized network card is plugged in - Power off and remove the miniPCI network card (1199/6813).

System is halted

So... I got everything back to its original state. Stupid DRM in what I felt was the openest laptop I have ever had. Gah.

Anyway... As you can see, I have a brand new cellular modem. I am willing to give it to the first person that offers me a nice beer in exchange, here in Mexico or wherever you happen to cross my path (just tell me so I bring the little bugger along!)

Of course, I even tried to get one of the nice volunteers to install Libreboot on my computer now that I was at LibrePlanet, which would have solved the issue. But they informed me that Libreboot only supports the (quite a bit older) X200 machines, not the X230.


Reproducible builds folks: Reproducible Builds: Weekly report #154

10 April, 2018 - 15:03

Here's what happened in the Reproducible Builds effort between Sunday April 1 and Saturday April 7, 2018:

Patches

In addition, build failure bugs were reported by Adam Borowski (1), Adrian Bunk (27) and Aurélien Courderc (1).

jenkins.debian.net development

Mattia Rizzolo made a large number of changes to our Jenkins-based testing framework, including:

Reviews of unreproducible packages

52 package reviews were added, 43 were updated and 50 were removed this week, adding to our knowledge about identified issues.

Two issue categorisation types were added (nondeterminism_in_files_generated_by_hfst & nondeterminism_in_apertium_lrx_bin_files_generated_by_lrx_comp) and two were removed (nondeterminstic_ordering_in_gsettings_glib_enums_xml & captures_build_path_in_python_sugar3_symlinks).

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Markus Koschany: My Free Software Activities in March 2018

10 April, 2018 - 04:58

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games Debian Java
  • I spent most of my free time on Java packages because… OpenJDK 9 is now the default Java runtime environment in Debian! As of today I count 319 RC bugs (bugs with severity normal would be serious today as well), of which 227 are already resolved. That means one third of the Java team's packages have to be adjusted for the new OpenJDK version. Java 9 comes with a new module system called Jigsaw. Undoubtedly it represents a lot of interesting new ideas, but it is also a major paradigm shift. For us mere packagers it means more work than any other version upgrade in the past. Let's say we are a handful of regular contributors (I'm being generous) and we spend most of our time stabilizing the Java ecosystem in Debian to the point that we can build all of our packages again. Repeat for every new Debian release. Unfortunately not much time is actually spent on packaging new and cool applications or libraries unless they are strictly required to fix a specific Java 9 issue. It just doesn't feel right at the moment. Most upstreams are rather indifferent or relaxed when it comes to porting their applications to Java 9 because they can still use Java 8, so why can't we? They don't have to provide security support for five years and can make the switch to Java 9 much later. They can also cherry-pick certain versions of libraries, whereas we have to ensure that everything works with one specific version of a library. But that's not all: Java 9 will not be shipped with Buster, and we are even aiming for OpenJDK 11! Releases of OpenJDK will be more frequent from now on, expect a new release every six months, and certain versions, like OpenJDK 11, will receive extended security support. One thing we can look forward to: apparently more commercial features of Oracle JDK will be merged into OpenJDK, and it appears the long-term goal is to make Oracle JDK and OpenJDK builds completely interchangeable. So maybe one day only one free software JDK for everything and everyone? I hope so.
  • I worked on the following packages to address Java 9 or other bugs: activemq, snakeyaml, libjchart2d-java, jackson-dataformat-yaml, jboss-threads, jboss-logmanager, jboss-logging-tools, qdox2, wildfly-common, activemq-activeio, jackson-datatype-joda, antlr, axis, libitext5-java, libitext1-java, libitext-java, jedit, conversant-disruptor, beansbinding, cglib, undertow, entagged, jackson-databind, libslf4j-java, proguard, libhtmlparser-java, libjackson-json-java and sweethome3d (patch by Emmanuel Bourg)
  • New upstream versions: jboss-threads, okio, libokhttp-java, snakeyaml, robocode.
  • I NMUed jtb and applied a patch from Tiago Stürmer Daitx.
Debian LTS

This was my twenty-fifth month as a paid contributor and I have been paid to work 23.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 19.03.2018 until 25.03.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in imagemagick, libvirt, freeplane, exempi, calibre, gpac, ipython, binutils, libraw, memcached, mosquitto, sdl-image1.2, slurm-llnl, graphicsmagick, libslf4j-java, radare2, sam2p, net-snmp, apache2, ldap-account-manager, librelp, ruby-rack-protection, libvncserver, zsh and xerces-c.
  • DLA-1310-1. Issued a security update for exempi fixing 6 CVE.
  • DLA-1315-1. Issued a security update for libvirt fixing 2 CVE.
  • DLA-1316-1. Issued a security update for freeplane fixing 1 CVE.
  • DLA-1322-1. Issued a security update for graphicsmagick fixing 6 CVE.
  • DLA-1325-1. Issued a security update for drupal7 fixing 1 CVE.
  • DLA-1326-1. Issued a security update for php5 fixing 1 CVE.
  • DLA-1328-1. Issued a security update for xerces-c fixing 1 CVE.
  • DLA-1335-1. Issued a security update for zsh fixing 2 CVE.
  • DLA-1340-1. Issued a security update for sam2p fixing 5 CVE. I also prepared a security update for Jessie. (#895144)
  • DLA-1341-1. Issued a security update for sdl-image1.2 fixing 6 CVE.
Misc
  • I triaged all open bugs in imlib2 and forwarded the issues upstream. The current developer of imlib2 was very responsive and helpful. Thanks to Kim Woelders several longstanding bugs could be fixed.
  • There was also a new upstream release for xarchiver. Check it out!

Thanks for reading and see you next time.

Lucas Kanashiro: Migrating PET features to distro-tracker

9 April, 2018 - 20:30

After joining the Debian Perl Team some time ago, PET has helped me a lot to find work to do in the team context, and it has also helped the whole team in our workflow. For those who do not know what PET is: "a collection of scripts that gather information about your (or your group's) packages. It allows you to see in a bird's eye view the health of hundreds of packages, instantly realizing where work is needed." PET became an important project, since about 20 Debian teams were using it, including the Perl and Ruby teams, in which I am more active.

In Cape Town, during DebConf16, I had a conversation with Raphael Hertzog about the possibility of migrating PET features to distro-tracker. He is one of the distro-tracker maintainers, and we found some similarities between those tools. However, after that I did not have enough time to push it forward. Then, after the migration from Alioth to Salsa, PET became almost unusable, because a lot of things were built on top of Alioth. This brought me the motivation to get this migration idea off the drawing board and support the PET features in distro-tracker's team visualization.

In the meantime, the Debian Outreach team published a GSoC call for mentors for this year. I was a Debian GSoC student in 2014 and 2015, and that was a great opportunity for me to join the community. With that in mind, and the wish to give this opportunity to others, I decided to become a mentor this year and proposed a project to implement the PET features in distro-tracker, called Improving distro-tracker to better support Debian Teams. We are at the student selection phase and I have received great proposals. I am looking forward to the start of the program and to finally having the PET features available in tracker.debian.org. And of course, to bringing new blood to the Debian Project, since this is the idea behind those outreach programs.

Michal Čihař: New projects on Hosted Weblate

9 April, 2018 - 17:00

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue had grown too long, with requests waiting for more than a month, so it's time to process it and include new projects. I hope that gives you good motivation to spend some spare time translating free software.

This time, the newly hosted projects include:

If you want to support this effort, please donate to Weblate; recurring donations especially are welcome to keep this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Lars Wirzenius: Architecture aromas

9 April, 2018 - 13:13

"Code smells" are a well-known concept: they're things that make you worried your code is not of good quality, without necessarily being horribly bad in and of themselves. Like a bad smell in a kitchen that looks clean.

I've lately been thinking of "architecture aromas", which indicate there might be something good about an architecture, but don't guarantee it.

An example: you can't think of any component that could be removed from the architecture without sacrificing main functionality.

Antoine Beaupré: Death of a Thinkpad x120e laptop

9 April, 2018 - 11:01

My laptop named "angela" is (was?) a Thinkpad x120e (ThinkWiki). It's a netbook model (although they branded it an Ultraportable), which back then meant a small, wide, slim laptop with less power, but cheaper. It did its job: I carried it through meetings and conferences all over the world for seven years. I also used it as a workstation for a short time in 2016-2017, when marcos stopped being a workstation and turned solely into a home cinema.

I always disliked the keyboard. I got used to the "chiclet" key style, but never to the missing top block of keys: I just use "scroll lock", "print screen" and "pause" too much. I also found the CPU much slower than my previous workstation (marcos), which made it a pain to go back to. Memory was also a limitation: I could apparently bump the memory to 8GB, but the cost is high (80$) and the configuration is not officially supported.

I also struggled with the wifi card, which works through binary blobs, and it's not possible to replace it because the BIOS blocks "unauthorized" cards from being installed, an absolutely ludicrous idea.

As a comparison, the Thinkpad x201, released a year earlier, fully supports 8GB of RAM and has a more powerful i5 processor. It can also run Coreboot, although that is less supported than other Thinkpad models. A generous friend was nice enough to give me his spare x201 which, even if it's incredibly worn out, already feels more solid, reliable and fast than my shiny x120e. And the x201 has broken keys, torn bezel and the hard drive cover is held together with duct tape. I love it.

How the x120e died

In the end, this laptop died a horrible death: it crashed, face first, onto a linoleum floor. This seems to have cracked something in the screen, which leaves the text barely readable and the colors totally off. Here is a GitHub webpage that is supposed to be white, but shows up as cyan:

This phenomenon progressively damages the display until it shows nothing but a blank white screen. I have heard it might be some gas that leaks from the display into the LCD, but that screen is supposed to be lit by an LED array (as opposed to CCFL; see the backlight article for more explanations), so that's probably not the problem. I don't quite know what's going on with that screen, but it's obviously dead, which is somewhat inconvenient for a laptop, to say the least.

It would probably be possible to replace that screen (40-60$USD for parts); however, there is another issue: the CPU/fan assembly also has a serious cooling problem. When the machine boots, the fan kicks in at full speed immediately. Just idling, the CPU hits 62°C. Doing a git annex fsck on a bunch of files (which involves many SHA256 checksums) made the CPU heat up to 99°C. Playing videos on Netflix completely crashes the machine with temperature warnings, as it struggles to decode videos in the browser. This might be fixable as well, but it means a lot of work on a machine I didn't like very much in the first place. In the meantime, the CPU can basically boil an egg, which can't be good for the hardware.

So basically, to get this machine up to speed, I would need to:

  1. replace the screen (60-80$ USD + time)
  2. replace the CPU/fan assembly (20$ USD + time + may not work at all)
  3. buy new RAM sticks (80$ USD + may not work at all)

... and I would still be stuck with that old CPU. Comparing this with a brand new Chromebook at around 300$ USD, or a used Thinkpad x230 for 300-400CAD which takes up to 16GB of RAM makes it difficult to justify the time and expense, although there's always the question of electronic waste reduction.

Anyway, angela served me well over the last seven years. May it rest in peace or be sold for parts. Of course, the soul of the laptop is still alive and moves on to its new home: it was easy to take the hard drive and connect it in the x201. Hopefully this will keep me traveling for a little while longer while I look for a replacement laptop. I obviously welcome suggestions for a replacement machine, although keep in mind I did my research in the aforementioned page.

Joey Hess: AIMS inverter control via GPIO ports

9 April, 2018 - 06:35

I recently upgraded my inverter to an AIMS 1500 watt pure sine inverter (PWRI150024S). This is a decent inverter for the price, I hope. It seems reasonably efficient under load compared to other inverters. But when it's fully idle, it still consumes 4 watts of power.

That's almost as much power as my laptop, and while 96 watt-hours per day may not sound like a lot of power, some days in winter, 100 watt-hours is my entire budget for the day. Adding more batteries just to power an idle inverter would probably be the normal solution. Instead, I want my house computer to turn it off when it's not being used.

Which brings me to the other problem with this inverter: the power control is not a throw switch, but a button you have to press and hold for a second. And looking inside the inverter, it could not easily be hacked to add a relay to control it.

The inverter has an RJ22 control port. AIMS does not seem to document what the pins do, so I reverse-engineered them.

Since the power is toggled, it's important that the computer be able to check if the inverter is currently running, to reliably get to the desired on/off state.

I designed (well, mostly cargo-culted) a circuit that uses 4N35 optoisolators to safely interface the AIMS with my cubietruck's GPIO ports, letting it turn the inverter on and off, and also check whether it's currently running. I built this board, the first PCB I've designed and built myself.
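The check-then-toggle logic is simple enough to sketch. This is a hypothetical Python illustration, not the actual Haskell code; the GPIO read and press operations are injected placeholders standing in for the real GPIO port accesses:

```python
import time

def set_inverter(want_on, gpio_read, gpio_press, settle_secs=2, tries=3):
    """Drive the inverter to the desired on/off state via its toggle
    button: read the current state, and only 'press' if it differs."""
    for _ in range(tries):
        if gpio_read() == want_on:
            return True
        gpio_press()             # press and hold the button for ~1 second
        time.sleep(settle_secs)  # give the inverter time to change state
    return gpio_read() == want_on
```

Because the button toggles rather than sets, reading the state first is what makes the operation idempotent and safe to retry.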

The full schematic and the Haskell code to control the inverter are in the git repository https://git.joeyh.name/index.cgi/joey/homepower.git/tree/. My design notebook for this build is available on Secure Scuttlebutt, along with power consumption measurements.

It works!

joey@darkstar:~>ssh house inverter status
off
joey@darkstar:~>ssh house inverter on
joey@darkstar:~>ssh house inverter status
on

Dirk Eddelbuettel: tint 0.1.0

8 April, 2018 - 21:57

A new release of the tint package just arrived on CRAN. Its name expands to "tint is not tufte", as the package offers a fresher take on the Tufte style for html and pdf presentations.

This version adds support for the tufte-book LaTeX style. The package now supports handouts in html or pdf format (as before) but also book-length material. I am using this myself in a current draft and it is fully working, though (as always) subject to changes.

A screenshot of a chapter opening and a following page is below:

One can deploy the template for book-style documents from either rmarkdown (easy) or bookdown (so far manual setup only). I am using the latter, but the difference does not really matter as long as you render the whole document at once; the only change with bookdown, really, is that your source directory ends up containing more files, giving you more clutter and more degrees of freedom to wonder what gets set where.

The full list of changes is below.

Changes in tint version 0.1.0 (2018-04-08)
  • A new template 'tintBook' was added.

  • The pdf variants now default to 'tango' as the highlighting style.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Niels Thykier: Build system changes in debhelper

8 April, 2018 - 16:55

Since debhelper/11.2.1[1], we now support using cmake for configure and ninja for build + test, as an alternative to cmake for configure and make for build + test. This change was proposed by Kyle Edwards in Debian bug #895044. You can try this new combination by specifying "cmake+ninja" as the build system.
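In a package that uses the short dh sequencer, opting in would look something like this minimal debian/rules sketch (not taken from any particular package):

```make
#!/usr/bin/make -f
%:
	dh $@ --buildsystem=cmake+ninja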

To facilitate this change, the cmake and meson debhelper build systems had to change their (canonical) names. As such you may have noticed that the "--list" option for dh_auto_* now produces a slightly different output:

$ dh_auto_configure --list | grep +
cmake+makefile       CMake (CMakeLists.txt) combined with simple Makefile
cmake+ninja          CMake (CMakeLists.txt) combined with Ninja (build.ninja)
meson+ninja          Meson (meson.build) combined with Ninja (build.ninja)


You might also notice that "cmake" and "meson" are no longer listed in the full list of build systems. To retain backwards compatibility, the names "cmake" and "meson" are handled as "cmake+makefile" and "meson+ninja". This can be seen if we specify a build system:

$ dh_auto_configure --buildsystem cmake --list | tail -n1
Specified: cmake+makefile (default for cmake)
$ dh_auto_configure --buildsystem cmake+makefile --list | tail -n1
Specified: cmake+makefile
$ dh_auto_configure --buildsystem cmake+ninja --list | tail -n1
Specified: cmake+ninja


If your package uses cmake, please give it a try and see what works and what does not. So far, the only known issue is that cmake+ninja may fail if the package has no tests, while it succeeds with cmake+makefile. I believe this is because the makefile build system checks whether the "test" or "check" targets exist before calling make.

Enjoy.


Footnotes:

[1] Strictly speaking, it was version 11.2.  However, version 11.2 had a severe regression that made it mostly useless.

Lars Wirzenius: Storing passwords in cleartext: don't ever

7 April, 2018 - 14:48

This year I've implemented a rudimentary authentication server for work, called Qvisqve. I am in the process of also using it for my current hobby project, ick, which provides HTTP APIs and needs authentication. Qvisqve stores passwords using scrypt: source. It has not been audited, and I'm not claiming it to be perfect, but at least it's not storing passwords in cleartext. (If you find a problem, do email me and tell me: liw@liw.fi.)
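For illustration, scrypt-based password storage amounts to something like the following minimal Python sketch, using the standard library's hashlib.scrypt. This is not Qvisqve's actual code, and the cost parameters are merely illustrative:

```python
# Sketch: store a random salt plus the scrypt-derived key, never the
# password itself; verification re-derives and compares in constant time.
import hashlib, hmac, os

def hash_password(password: str) -> bytes:
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + key  # store this blob in the database

def check_password(password: str, stored: bytes) -> bool:
    salt, key = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, key)
```

A leak of such a database still forces every user to change their password, but an attacker has to brute-force each one through a deliberately expensive function instead of reading them off directly.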

This week, two news stories have reached me about service providers storing passwords in cleartext. One is a Finnish system for people starting a new business. The password database has leaked, with about 130,000 cleartext passwords. The other is about T-mobile in Austria bragging on Twitter that they store customer passwords in cleartext, and some people not liking that.

In both cases, representatives of the company claim it's OK, because they have "good security". I disagree. Storing passwords in cleartext is itself shockingly bad security, regardless of how good your other security measures are, and whether your password database leaks or not. Claiming it's ever OK to store user passwords in cleartext in a service is incompetence at best.

When you have large numbers of users, storing passwords in cleartext becomes more than just a small "oops". It becomes a security risk for all your users. It becomes gross incompetence.

A bank is required to keep its customers' money secure. It's not allowed to store customers' cash in a suitcase on the pavement without anyone guarding it. Even with a guard, it'd be negligent, incompetent, to do that. The bulk of the money gets stored in a vault, with alarms, and guards, and the bank spends much effort on making sure the money is safe. Everyone understands this.

Similar requirements should be placed on those storing passwords, or other such security-sensitive information of their users.

Storing passwords in cleartext, when you have large numbers of users, should be criminal negligence, and should carry legally mandated sanctions. This should happen when the situation is realised, even if the passwords haven't leaked.

Louis-Philippe Véronneau: Debian & Stuff -- Montreal Debian Meeting

7 April, 2018 - 11:00

Today we had a meeting of the local Montreal Debian group. Our last few meetings were centered on finishing the DebConf17 final report, and some people told us they didn't feel welcome because they hadn't been part of organising the conference.

I thus decided to call today's event "Debian & Stuff" and invite people to come hack with us on diverse Debian related projects. Most of the people who came were part of the DC17 local team, but a few other people came anyway and we all had a great time. Someone even came from Ottawa to learn how to compile the Linux kernel!

We held the meeting at the Foulab hackerspace, one of the first hackerspaces in Canada. It's in the western part of Montreal, so I had never been there before, but I had heard great things about it.

The space is so cool we had a lot of trouble staying concentrated on the projects we were working on: they brew beer locally, they have a bunch of incredible hardware projects, they run a computer museum, their door is opened remotely using SSH keys ... I could go on and on. If I had more time, I would definitely become a member.

We managed to do a lot of work on the final report and if everything goes well, we should be able to finish the report during our next meeting.

Steinar H. Gunderson: kTLS in Cubemap

7 April, 2018 - 03:03

Cubemap, my video reflector, is getting TLS support. This isn't really because I think Cubemap video is very privacy-sensitive (although I suppose it does protect it against any meddling ISP middleboxes that would want to transcode the video), but putting non-TLS video on TLS pages is getting increasingly frowned upon by browsers—it used to provoke mixed content warnings, but now, it's usually just blocked outright.

This took longer than one would expect, since Cubemap prides itself on extremely high performance. (Even when it was written, five years ago, it could sustain multiple 10gig links on a single, old quadcore.) Cubemap is different from regular HTTP servers in that it doesn't really care about small requests; it doesn't do HLS or MPEG-DASH (although HLS support is also on its way!), just a single very long stream of video, so startup time doesn't matter at all. To that end, it uses sendfile() (from a buffer file, usually on tmpfs or similar), which wasn't compatible with TLS… until now.

Linux >= 4.13 supports kTLS, where the kernel does the encryption and framing needed (after userspace has done the handshake and handed over the keys). This allows us to keep using sendfile(), and also benefit from the kernel's generally more efficient handling of segmented buffers, reducing the number of copies. Also, of course, the kernel would be able to use any encryption offloads efficiently, although I don't think it's actually doing so for kTLS yet.
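The zero-copy pattern being preserved here can be illustrated with a hypothetical Python miniature (Cubemap itself is C++; the socketpair and temp file below are stand-ins for an accepted TCP connection and the tmpfs buffer file, and Linux semantics for os.sendfile are assumed):

```python
import os
import socket
import tempfile

# A small file stands in for Cubemap's stream buffer on tmpfs.
buf = tempfile.NamedTemporaryFile(delete=False)
buf.write(b"\x47" * 1880)  # dummy payload, ten MPEG-TS-packet-sized chunks
buf.flush()

# A socketpair stands in for an accepted client connection.
server_sock, client_sock = socket.socketpair()

with open(buf.name, "rb") as src:
    # The kernel copies straight from the page cache to the socket, with
    # no userspace copy. With kTLS configured on the socket (TCP_ULP set
    # to "tls" plus TLS_TX key material), it would also encrypt in-kernel.
    sent = os.sendfile(server_sock.fileno(), src.fileno(), 0, 1880)

received = b""
while len(received) < sent:
    received += client_sock.recv(4096)
```

With a plain userspace TLS library this pattern breaks, because the data would have to be read into userspace for encryption before being written back out; kTLS keeps the sendfile() path intact.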

The other problem is that Cubemap is designed to have extremely long-lived connections. Since it doesn't do segmented video (which is typically rather high-latency, and also tends to demand more of the TCP congestion control algorithms), clients can be connected for hours at a time, which makes restarts for upgrades trickier. Cubemap solves this by serializing all its state to a file, then exec()-ing the new binary and reloading the state, meaning no connections need to be broken; it stops serving video for mere milliseconds, and clients won't notice a thing. (It also deals with configuration changes this way, since restart is a strictly more powerful concept than configuration reload.) However, most TLS libraries (indeed, most libraries in general) don't support serializing their state.
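The restart trick can be sketched roughly like this (a hypothetical Python miniature, with JSON standing in for Cubemap's actual state format; the file path and state fields are invented for illustration):

```python
import json
import os
import sys
import tempfile

def serialize_state(state, path):
    # Cubemap serializes far more than this (client fds, stream byte
    # positions, ...); a JSON dict stands in for that state here.
    with open(path, "w") as f:
        json.dump(state, f)

def load_state(path):
    with open(path) as f:
        return json.load(f)

def upgrade_in_place(state, path):
    serialize_state(state, path)
    # Client sockets survive exec() as long as FD_CLOEXEC is not set on
    # them, so no connection is broken; only the process image changes.
    os.execv(sys.executable, [sys.executable] + sys.argv)  # never returns

# Round-trip demo (skipping the exec() step):
state_path = os.path.join(tempfile.gettempdir(), "reflector-state.json")
serialize_state({"clients": [{"fd": 7, "bytes_sent": 1234}]}, state_path)
restored = load_state(state_path)
```

The new binary starts up, finds the state file, reattaches to the inherited file descriptors, and carries on serving.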

Salvation comes in the form of TLSe, a small TLS library that supported exactly such serialization (originally as a security feature to be able to separate the private keys out into another process, I believe). Eduard Suica, the TLSe author, was able to add kTLS support in almost no time at all, and after some bugfixing, it appears quite stable. I figured out fairly late that it can't serialize during the key exchange, though (so in that very narrow window, a restart might mean killing the connection), and after the handshake, most of the state resides in the kTLS socket anyway (rendering the feature less important), but OK; I don't regret the choice of library just yet. :-)

With not too much code at all, this is the end result (no A+ due to lack of HSTS support):

kTLS currently supports TLS 1.2 and AES-128-GCM only, but OK, that's still widely supported:

It's hard for me to assess peak performance, but I ran some simple tests with ab, and before it ran out of file descriptors (which I couldn't easily change on the test machine), it reached 4.5 Gbit/sec with a completely untuned kernel… on 60% of a single 2.3 GHz Haswell core. I think I can safely say that any reasonable modern machine will be able to saturate 10gig with TLS.

I hope we'll see TLS 1.3 eventually in TLSe and kTLS, since I'd love to have Curve25519 support for cheaper initial connection setup. But again, what matters is the steady-state condition, and for almost all streams, your viewer interest or server bandwidth is bound to be the limiting factor way before your CPU is. And now, that holds true even with TLS. :-)

Michal Čihař: Weblate 2.20

5 April, 2018 - 21:45

Weblate 2.20 has been released today. There are several performance improvements, new features and bug fixes.

Full list of changes:

  • Improved speed of cloning subversion repositories.
  • Changed repository locking to use a third-party library.
  • Added support for downloading only strings needing action.
  • Added support for searching in several languages at once.
  • New addon to configure Gettext output wrapping.
  • New addon to configure JSON formatting.
  • Added support for authentication in API using RFC 6750 compatible Bearer authentication.
  • Added support for automatic translation using machine translation services.
  • Added support for HTML markup in whiteboard messages.
  • Added support for mass changing state of strings.
  • Translate-toolkit 2.3.0 or newer is now required; older versions are no longer supported.
  • Added built-in translation memory.
  • Added componentlists overview to dashboard and per component list overview pages.
  • Added support for DeepL machine translation service.
  • Machine translation results are now cached inside Weblate.
  • Added support for reordering committed changes.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, and you can influence it by expressing support for individual issues, either through comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Abhijith PA: FOSSASIA experience

5 April, 2018 - 13:54

Hello Everyone !

I was able to attend this year's FOSSASIA summit, held at the Lifelong Learning Institute, Singapore. It's a decently sized, four-day conference with lots of parallel tracks covering a vast range of areas, from fun tinkering projects to big topics like blockchain and data mining. There were so many parallel tracks that I decided to attend fewer talks and instead meet more people around the venue. The atmosphere was very hacker friendly.

I spent most of my time at the Debian booth. People swung by the booth and talked about their experience with Debian. It was fun to meet them all. Prior to the conference I created a wiki page to coordinate the Debian booth at the exhibition, which really helped.
I met three Debian Developers - Chow Loong Jin (hyperair), Andrew Lee 李健秋 (ajqlee) and Héctor Orón Martínez (zumbi). Andrew Lee and zumbi also volunteered at the Debian booth from time to time, along with Balasankar ‘balu’ C (balasankarc). Hyperair was sitting at the HackerspaceSG booth, just two booths across from us.

All in all it was an amazing conference. I want to reach out to the organizers and thank them for FOSSASIA.

Singapore

Singapore is a beautiful city with lots of tourists and tourist attractions. Everywhere is well connected by the public transport system; people can reach every corner of Singapore with Metro trains and buses. You can find a huge variety of food in Singapore: stalls serving light meals and restaurants are everywhere. On top of that, there are stores like 7-Eleven where you can get instant noodles and similar things. Anish (a Debian contributor, and a friend of mine and balu's from Kerala, who now lives in Singapore) taught me how to use chopsticks :). I also brought home a pair of chopsticks as a souvenir that came with my lunch.

Matthew Garrett: Linux kernel lockdown and UEFI Secure Boot

5 April, 2018 - 08:07
David Howells recently published the latest version of his kernel lockdown patchset. This is intended to strengthen the boundary between root and the kernel by imposing additional restrictions that prevent root from modifying the kernel at runtime. It's not the first feature of this sort - /dev/mem no longer allows you to overwrite arbitrary kernel memory, and you can configure the kernel so only signed modules can be loaded. But the present state of things is that these security features can be easily circumvented (by using kexec to modify the kernel security policy, for instance).

Why do you want lockdown? If you've got a setup where you know that your system is booting a trustworthy kernel (you're running a system that does cryptographic verification of its boot chain, or you built and installed the kernel yourself, for instance) then you can trust the kernel to keep secrets safe from even root. But if root is able to modify the running kernel, that guarantee goes away. As a result, it makes sense to extend the security policy from the boot environment up to the running kernel - it's really just an extension of configuring the kernel to require signed modules.

The patchset itself isn't hugely conceptually controversial, although there's disagreement over the precise form of certain restrictions. But one patch has proved controversial, because it ties whether or not lockdown is enabled to whether or not UEFI Secure Boot is enabled. There's some backstory that's important here.

Most kernel features get turned on or off by either build-time configuration or by passing arguments to the kernel at boot time. There are two ways that this patchset allows a bootloader to tell the kernel to enable lockdown mode - it can either pass the lockdown argument on the kernel command line, or it can set the secure_boot flag in the bootparams structure that's passed to the kernel. If you're running in an environment where you're able to verify the kernel before booting it (either through cryptographic validation of the kernel, or knowing that there's a secret tied to the TPM that will prevent the system booting if the kernel's been tampered with), you can turn on lockdown.
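For the command-line route, the bootloader configuration might look like the following hypothetical sketch. The exact parameter spelling depends on the kernel version (the patchset under discussion and later mainline kernels differ, the latter eventually accepting lockdown=integrity or lockdown=confidentiality), so check your kernel's documentation before relying on it.

```shell
# /etc/default/grub (hypothetical example; parameter spelling varies by kernel)
GRUB_CMDLINE_LINUX="lockdown=integrity"
# Then regenerate the GRUB configuration, e.g. on Debian:
#   sudo update-grub
```

The bootparams route needs no such configuration: the bootloader sets the flag itself based on the firmware's Secure Boot state.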

There's a catch on UEFI systems, though - you can build the kernel so that it looks like an EFI executable, and then run it directly from the firmware. The firmware doesn't know about Linux, so can't populate the bootparam structure, and there's no mechanism to enforce command lines so we can't rely on that either. The controversial patch simply adds a kernel configuration option that automatically enables lockdown when UEFI secure boot is enabled and otherwise leaves it up to the user to choose whether or not to turn it on.

Why do we want lockdown enabled when booting via UEFI secure boot? UEFI secure boot is designed to prevent the booting of any bootloaders that the owner of the system doesn't consider trustworthy[1]. But a bootloader is only software - the only thing that distinguishes it from, say, Firefox is that Firefox is running in user mode and has no direct access to the hardware. The kernel does have direct access to the hardware, and so there's no meaningful distinction between what grub can do and what the kernel can do. If you can run arbitrary code in the kernel then you can use the kernel to boot anything you want, which defeats the point of UEFI Secure Boot. Linux distributions don't want their kernels to be used as part of an attack chain against other distributions or operating systems, so they enable lockdown (or equivalent functionality) for kernels booted this way.

So why not enable it everywhere? There's a couple of reasons. The first is that some of the features may break things people need - for instance, some strange embedded apps communicate with PCI devices by mmap()ing resources directly from sysfs[2]. This is blocked by lockdown, which would break them. Distributions would then have to ship an additional kernel that had lockdown disabled (it's not possible to just have a command line argument that disables it, because an attacker could simply pass that), and users would have to disable secure boot to boot that anyway. It's easier to just tie the two together.

The second is that it presents a promise of security that isn't really there if your system didn't verify the kernel. If an attacker can replace your bootloader or kernel then the ability to modify your kernel at runtime is less interesting - they can just wait for the next reboot. Appearing to give users safety assurances that are much less strong than they seem to be isn't good for keeping users safe.

So, what about people whose work is impacted by lockdown? Right now there are two ways to get stuff blocked by lockdown unblocked: either disable secure boot[3] (which will disable it until you enable secure boot again) or press alt-sysrq-x (which will disable it until the next boot). Discussion has suggested that having an additional secure variable that disables lockdown without disabling secure boot validation might be helpful, and it's not difficult to implement that so it'll probably happen.

Overall: the patchset isn't controversial, just the way it's integrated with UEFI secure boot. The reason it's integrated with UEFI secure boot is because that's the policy most distributions want, since the alternative is to enable it everywhere even when it doesn't provide real benefits but does provide additional support overhead. You can use it even if you're not using UEFI secure boot. We should have just called it securelevel.

[1] Of course, if the owner of a system isn't allowed to make that determination themselves, the same technology is restricting the freedom of the user. This is abhorrent, and sadly it's the default situation in many devices outside the PC ecosystem - most of them not using UEFI. But almost any security solution that aims to prevent malicious software from running can also be used to prevent any software from running, and the problem here is the people unwilling to provide that policy to users rather than the security features.
[2] This is how X.org used to work until the advent of kernel modesetting
[3] If your vendor doesn't provide a firmware option for this, run sudo mokutil --disable-validation



Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.