Planet Debian

Planet Debian - https://planet.debian.org/

John Goerzen: Treasuring Moments

1 July, 2019 - 21:35

“Treasure the moments you have. Savor them for as long as you can, for they will never come back again.”

– J. Michael Straczynski

This quote sits on a post-it note on my desk. Here are some moments of our fast-changing little girl that I’m remembering today — she’s almost 2!

Brothers & Sister

Martha loves to play with her siblings. She has names for them — Jacob is “beedoh” and Oliver is “ah-wah”. When she sees them come home, she gets a huge smile and will screech with excitement. Then she will ask them to play with her.

She loves to go down the slide with Jacob. “Beedoh sigh?” (Jacob slide) — that’s her request. He helps her up, then they go down together. She likes to swing side-by-side with Oliver. “Ahwah sing” (Oliver swing) when she wants him to get on the swing next to her. The boys enjoy teaching her new words and games.

[Video: Martha and Jacob on the slide]

Music

Martha loves music! To her, “sing” is a generic word for music. If we’re near a blue speaker, she’ll say “boo sing” (blue sing) and ask for it to play music.

But her favorite request is “daddy sing.” It doesn’t mean she wants me to sing. No, she wants me to play my xaphoon (a sax-like instrument). She’ll start jumping, clapping, and bopping her head to the music. Her favorite spot to do this is a set of soft climbing steps by the piano.

But that’s not enough — next she pulls out our hymnbooks and music books and pretends to sing along. “Wawawawawawa the end!”

If I decide to stop playing, that is most definitely not allowed. “Daddy sing!” And if I don’t comply, she gets louder and more insistent: “DADDY SING.”

[Videos: Martha singing and reading from hymn books, singing her ABCs]

Airplanes

Martha loves airplanes. She started to be able to say “airplane” — first “peen”, then “airpeen”, and now “airpane!” When we’re outside and she hears any kind of buzzing that might possibly be a plane, I’m supposed to instantly pick her up and carry her past our trees so we can look for it. “AIRPANE! AIRPANE! Ho me?” (hold me) Then when we actually see a plane, it’s “Airpane! Hi airpane!” And as it flies off, “Bye-bye airpane. Bye-bye. [sadly] Airpane all done.”

One day, Martha was trying to see airplanes, but it was cloudy. I bundled her up and we went to our local GA airport and stood in the grass watching planes. Now that was a hit! Now anytime Martha sees warehouse-type buildings, she thinks they are hangars, and begs to go to the airport. She loves to touch the airplane, climb inside it, and look at the airport beacon — even if we won’t be flying that day.

[Video: Hi big plane!]

Martha getting ready for a flight

This year, for Mother’s Day, we were going to fly to a nearby airport with a restaurant on the field. I took a photo of our family by the plane before we left. All were excited!

Mother’s Day photo

Mornings

We generally don’t let Martha watch TV, but make a few exceptions for watching a few videos and looking at family pictures. A while back, Martha asked to play with me while I was getting ready for the day. “Martha, I have to get dressed first. Then I’ll play with you.” “OK,” she said.

She ran off into the closet, and came back with what she could reach of my clothing – a dirty shirt – and handed it up to me to wear. I now make sure to give her the chance to bring me socks, shirts, etc. And especially shoes. She really likes to bring me shoes.

Then we go downstairs. Sometimes she sits on my lap in the office and we watch Youtube videos of owls or fish. Or sometimes we go downstairs and start watching One Six Right, a wonderful aviation documentary. She and I jabber about what we see — she can identify the beacon (“bee”), big hangar door (“bih doh”), airplanes of different colors (“yellow one”), etc. She loves to see a little Piper Cub fly over some cows, and her favorite shot is a plane that flies behind the control tower at sunset. She’ll lean over and look for it as if it’s going around a corner.

Sometimes we look at family pictures and videos. Her favorite is a video of herself in a plane, jabbering and smiling. She’ll ask to watch it again and again.

Bedtime

Part of our bedtime routine is that I read a story to Martha. For a long time, I read her The Very Hungry Caterpillar by Eric Carle. She loved that book, and one night said “geecko” for pickle. She noticed I clapped for it, and so after that she always got excited for the geeckos and would clap for them.

Lately, though, she wants the “airpane book” – Clair Bear’s First Solo. We read through that book, she looks at the airplanes that fly, and always has an eye out for the “yellow one” and “boo one” (blue plane). At the end, she requests “more pane? More pane?”

After that, I wave goodnight to her. She used to wave back, but now she says “Goodnight, daddy!” and heads on up the stairs.

Jonas Meurer: debian lts report 2019.06

1 July, 2019 - 19:59
Debian LTS report for June 2019

This month I was allocated 17 hours. I also had 1.75 hours left over from May, which makes a total of 18.75 hours. I spent 16.75h of them on the following issues, which means I again carry over 2h to the next month.

Links

Julien Danjou: Handling multipart/form-data natively in Python

1 July, 2019 - 16:21

RFC7578 (which obsoletes RFC2388) defines the multipart/form-data type that is usually transported over HTTP when users submit forms on your Web page. Nowadays, it tends to be replaced by JSON encoded payloads; nevertheless, it is still widely used.

While you can decode an HTTP request body made with JSON natively with Python — thanks to the json module — there is no such way to do that with multipart/form-data. That is somewhat surprising, considering how old the format is.

There is a wide variety of ways available to encode and decode this format. Libraries such as requests support it natively without you even noticing, and the same goes for the majority of Web server frameworks such as Django or Flask.

However, in certain circumstances, you might be on your own to encode or decode this format, and it might not be an option to pull (significant) dependencies.

Encoding

The multipart/form-data format is quite simple to understand and can be summarised as an easy way to encode a list of keys and values, i.e., a portable way of serializing a dictionary.

There's nothing in Python to generate such an encoding. The format is quite simple and consists of the key and value surrounded by a random boundary delimiter. This delimiter must be passed as part of the Content-Type, so that the decoder can decode the form data.

There's a simple implementation in urllib3 that does the job. It's possible to summarize it in this simple implementation:

import binascii
import os


def encode_multipart_formdata(fields):
    boundary = binascii.hexlify(os.urandom(16)).decode('ascii')

    body = (
        "".join("--%s\r\n"
                "Content-Disposition: form-data; name=\"%s\"\r\n"
                "\r\n"
                "%s\r\n" % (boundary, field, value)
                for field, value in fields.items()) +
        "--%s--\r\n" % boundary
    )

    content_type = "multipart/form-data; boundary=%s" % boundary

    return body, content_type

You can use it by passing a dictionary whose keys and values are strings. For example:

encode_multipart_formdata({"foo": "bar", "name": "jd"})

Which returns:

--00252461d3ab8ff5c25834e0bffd6f70
Content-Disposition: form-data; name="foo"

bar
--00252461d3ab8ff5c25834e0bffd6f70
Content-Disposition: form-data; name="name"

jd
--00252461d3ab8ff5c25834e0bffd6f70--
multipart/form-data; boundary=00252461d3ab8ff5c25834e0bffd6f70

You can use the returned content type in the Content-Type header of your HTTP request, with the returned body as the request payload.
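As a hedged illustration (the URL is a placeholder and only the standard library is used), the two returned values might be sent in a POST request like this:

import urllib.request

# encode_multipart_formdata is the function defined above.
body, content_type = encode_multipart_formdata({"foo": "bar", "name": "jd"})

request = urllib.request.Request(
    "http://example.com/upload",             # placeholder URL
    data=body.encode("utf-8"),               # urllib expects bytes
    headers={"Content-Type": content_type},  # carries the boundary
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(response.status)

Note that this format is not only used for forms: it can also be used by emails.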

Emails did you say?

Encoding with email

Right, emails are usually encoded using MIME, which is defined by yet another RFC, RFC2046. It turns out that multipart/form-data is just a particular MIME format, and that if you have code that implements MIME handling, it's easy to use it to implement this format.

Fortunately for us, the Python standard library comes with a module that handles exactly that: email.mime. I told you this format was also used by emails — I guess that's why they put that code in the email subpackage.

Here's a piece of code that handles multipart/form-data in a few lines of code:

from email import message
from email.mime import multipart
from email.mime import nonmultipart
from email.mime import text


class MIMEFormdata(nonmultipart.MIMENonMultipart):
    def __init__(self, keyname, *args, **kwargs):
        super(MIMEFormdata, self).__init__(*args, **kwargs)
        self.add_header(
            "Content-Disposition", "form-data; name=\"%s\"" % keyname)


def encode_multipart_formdata(fields):
    m = multipart.MIMEMultipart("form-data")

    for field, value in fields.items():
        data = MIMEFormdata(field, "text", "plain")
        data.set_payload(value)
        m.attach(data)

    return m
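Presumably the function is invoked with the same dictionary as before and its result printed (the exact call is an assumption on my part), for example:

print(encode_multipart_formdata({"foo": "bar", "name": "jd"}))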

Using this piece of code returns the following:

Content-Type: multipart/form-data; boundary="===============1107021068307284864=="
MIME-Version: 1.0

--===============1107021068307284864==
Content-Type: text/plain
MIME-Version: 1.0
Content-Disposition: form-data; name="foo"

bar
--===============1107021068307284864==
Content-Type: text/plain
MIME-Version: 1.0
Content-Disposition: form-data; name="name"

jd
--===============1107021068307284864==--

This method has several advantages over our first implementation:

  • It handles Content-Type for each of the added MIME parts. We could add data types other than text/plain, whereas the first version implicitly treats everything as text/plain. We could also specify the charset (encoding) of the textual data.
  • It's very likely more robust by leveraging the widely tested Python standard library.

The main downside, in that case, is that the Content-Type header is included with the content. When handling HTTP, this is problematic, as that header needs to be sent as part of the HTTP headers and not as part of the payload.

It should be possible to build a particular generator from email.generator that does this. I'll leave that as an exercise to you, reader.
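If you want a head start on that exercise, here is one possible sketch (an assumption on my part, not the original author's solution) that side-steps email.generator entirely by serializing the message and splitting the top-level headers off manually:

import email.policy


def formdata_from_message(m):
    # m is a MIMEMultipart("form-data") built as in encode_multipart_formdata().
    # email.policy.HTTP serializes with \r\n line endings and no line wrapping.
    raw = m.as_bytes(policy=email.policy.HTTP)
    # Everything before the first blank line is the top-level header block
    # (Content-Type, MIME-Version); everything after it is the multipart payload.
    _headers, _, body = raw.partition(b"\r\n\r\n")
    return body, m["Content-Type"]

The Content-Type value (with its boundary) can then be sent as a real HTTP header while only the payload goes into the body.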

Decoding

We must be able to use that same email package to decode our encoded data, right? It turns out that's the case, with a piece of code that looks like this:

import email.parser


msg = email.parser.BytesParser().parsebytes(my_multipart_data)

print({
    part.get_param('name', header='content-disposition'): part.get_payload(decode=True)
    for part in msg.get_payload()
})

With the example data above, this returns:

{'foo': b'bar', 'name': b'jd'}

Amazing, right?

The moral of this story is that you should never underestimate the power of the standard library. While it's easy to add a single line in your list of dependencies, it's not always required if you dig a bit into what Python provides for you!

Mike Hommey: Announcing git-cinnabar 0.5.2

1 July, 2019 - 12:17

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.1?
  • Updated git to 2.22.0 for the helper.
  • cinnabarclone support is now enabled by default. See details in README.md and mercurial/cinnabarclone.py.
  • cinnabarclone now supports grafted repositories.
  • git cinnabar fsck now does incremental checks against last known good state.
  • Avoid git cinnabar sometimes thinking the helper is not up-to-date when it is.
  • Removing bookmarks on a Mercurial server is now working properly.

Paul Wise: FLOSS Activities June 2019

1 July, 2019 - 09:45
Changes

Issues

Review

Administration
  • Debian: investigate/fix gitlab issue, break LDAP sync lock
  • Debian wiki: fix bugs in anti-spam script, whitelist email domains, whitelist email addresses, update email for accounts with bouncing email
Communication

Sponsors

All work was done on a volunteer basis.

Sylvain Beucler: Debian LTS - June 2019

1 July, 2019 - 03:16

Here is my transparent report for my work on the Debian Long Term Support (LTS) project, which extends the security support for past Debian releases, as a paid contributor.

In June, the monthly sponsored hours were split among contributors according to their declared maximum availability - I declared a maximum of 30h and got 17h.

I mostly spent time on tricky updates. Uploading one with literally thousands of reverse dependencies can be quite a challenge. Especially when, as is sadly common, the CVE description is (willingly?) vague, and no reproducer is available.

Matthew Garrett: Which smart bulbs should you buy (from a security perspective)

1 July, 2019 - 03:10
People keep asking me which smart bulbs they should buy. It's a great question! As someone who has, for some reason, ended up spending a bunch of time reverse engineering various types of lightbulb, I'm probably a reasonable person to ask. So. There are four primary communications mechanisms for bulbs: wifi, bluetooth, zigbee and zwave. There's basically zero compelling reasons to care about zwave, so I'm not going to.

Wifi
Advantages: Doesn't need an additional hub - you can just put the bulbs wherever. The bulbs can connect out to a cloud service, so you can control them even if you're not on the same network.
Disadvantages: Only works if you have wifi coverage, and each bulb has to have wifi hardware and be configured appropriately.
Which should you get: If you search Amazon for "wifi bulb" you'll get a whole bunch of cheap bulbs. Don't buy any of them. They're mostly based on a custom protocol from Zengge and they're shit. Colour reproduction is bad, there's no good way to use the colour LEDs and the white LEDs simultaneously, and if you use any of the vendor apps they'll proxy your device control through a remote server with terrible authentication mechanisms. Just don't. The ones that aren't Zengge are generally based on the Tuya platform, whose security model is to have keys embedded in some incredibly obfuscated code and hope that nobody can find them. TP-Link make some reasonably competent bulbs but also use a weird custom protocol with hand-rolled security. Eufy are fine but again there's weird custom security. Lifx are the best bulbs, but have zero security on the local network - anyone on your wifi can control the bulbs. If that's something you care about then they're a bad choice, but also if that's something you care about maybe just don't let people you don't trust use your wifi.
Conclusion: If you have to use wifi, go with lifx. Their security is not meaningfully worse than anything else on the market (and they're better than many), and they're better bulbs. But you probably shouldn't go with wifi.

Bluetooth
Advantages: Doesn't need an additional hub. Doesn't need wifi coverage. Doesn't connect to the internet, so remote attack is unlikely.
Disadvantages: Only one control device at a time can connect to a bulb, so harder to share. Control device needs to be in Bluetooth range of the bulb. Doesn't connect to the internet, so you can't control your bulbs remotely.
Which should you get: Again, most Bluetooth bulbs you'll find on Amazon are shit. There's a whole bunch of weird custom protocols and the quality of the bulbs is just bad. If you're going to go with anything, go with the C by GE bulbs. Their protocol is still some AES-encrypted custom binary thing, but they use a Bluetooth controller from Telink that supports a mesh network protocol. This means that you can talk to any bulb in your network and still send commands to other bulbs - the dual advantages here are that you can communicate with bulbs that are outside the range of your control device and also that you can have as many control devices as you have bulbs. If you've bought into the Google Home ecosystem, you can associate them directly with a Home and use Google Assistant to control them remotely. GE also sell a wifi bridge - I have one, but haven't had time to review it yet, so I make no assertions about its competence. The colour bulbs are also disappointing, with much dimmer colour output than white output.

Zigbee
Advantages: Zigbee is a mesh protocol, so bulbs can forward messages to each other. The bulbs are also pretty cheap. Zigbee is a standard, so you can obtain bulbs from several vendors that will then interoperate - unfortunately there are actually two separate standards for Zigbee bulbs, and you'll sometimes find yourself with incompatibility issues there.
Disadvantages: Your phone doesn't have a Zigbee radio, so you can't communicate with the bulbs directly. You'll need a hub of some sort to bridge between IP and Zigbee. The ecosystem is kind of a mess, and you may have weird incompatibilities.
Which should you get: Pretty much every vendor that produces Zigbee bulbs also produces a hub for them. Don't get the Sengled hub - anyone on the local network can perform arbitrary unauthenticated command execution on it. I've previously recommended the Ikea Tradfri, which at the time only had local control. They've since added remote control support, and I haven't investigated that in detail. But overall, I'd go with the Philips Hue. Their colour bulbs are simply the best on the market, and their security story seems solid - performing a factory reset on the hub generates a new keypair, and adding local control users requires a physical button press on the hub to allow pairing. Using the Philips hub doesn't tie you into only using Philips bulbs, but right now the Philips bulbs tend to be as cheap (or cheaper) than anything else.

But what about
If you're into tying together all kinds of home automation stuff, then either go with Smartthings or roll your own with Home Assistant. Both are definitely more effort if you only want lighting.

My priority is software freedom
Excellent! There are various bulbs that can run the Espurna or AiLight firmwares, but you'll have to deal with flashing them yourself. You can tie that into Home Assistant and have a completely free stack. If you're ok with your bulbs being proprietary, Home Assistant can speak to most types of bulb without an additional hub (you'll need a supported Zigbee USB stick to control Zigbee bulbs), and will support the C by GE ones as soon as I figure out why my Bluetooth transmissions stop working every so often.

Conclusion
Outside niche cases, just buy a Hue. Philips have done a genuinely good job. Don't buy cheap wifi bulbs. Don't buy a Sengled hub.

(Disclaimer: I mentioned a Google product above. I am a Google employee, but do not work on anything related to Home.)

Chris Lamb: Free software activities in June 2019

1 July, 2019 - 02:04

Here is my monthly update covering what I have been doing in the free software world during June 2019 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom. Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month:

I then spent significant time working on buildinfo.debian.net, my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them. This included:

  • Started making the move to Python 3.x (and Django 2.x) [...][...][...][...][...][...][...], additionally performing a large number of adjacent cleanups including dropping the authentication framework [...], fixing a number of flake8 warnings [...], adding a setup.cfg to silence some warnings [...], moving to __str__ and str.format(...) over %-style interpolation and u"unicode" strings [...], etc.

  • I also added a number of (as-yet unreleased…) features, including caching the expensive landing page queries. [...]

  • Took the opportunity to start migrating the hosting from its current GitHub home to a more-centralised repository on salsa.debian.org, moving from the Travis to the GitLab continuous integration platform, updating the URL to the source in the footer [...] and many other related changes [...].

  • Applied the Black "uncompromising code formatter" to the codebase. [...]

I also made the following changes to our tooling:

  • strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, I added support for the clamping of tIME chunks in .png files. [...]

  • In diffoscope (our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues) I documented that run_diffoscope should not be considered a stable API [...] and adjusted the configuration to build the Docker image from the current Git checkout, not the Debian archive [...]

Finally, I spent a significant amount of time working on our website this month, including:

  • Move the remaining site to the newer website design. This was a long-outstanding task (#2) and required a huge number of changes, including moving all the event and documentation pages to the new design [...] and migrating/merging the old _layouts/page.html into the new design [...] too. This could then allow for many cleanups including moving/deleting files into cleaner directories, dropping a bunch of example layouts [...] and dropping the old "home" layout. [...]

  • Adding reports to the homepage. (#16)

  • I also took the opportunity to re-order and merge various top-level sections of the site to make the page easier to parse/navigate [...][...] and I updated the documentation for SOURCE_DATE_EPOCH to clarify that the alternative -r call to date(1) is for compatibility with BSD variants of UNIX [...].

  • Made a large number of visual fixups, particularly to accommodate the principles of responsive web design. [...][...][...][...][...]

  • Updated the lint functionality of the build system to check for URIs that are not using {{ "/foo/" | prepend: site.baseurl }}-style relative URLs. [...]


Debian Lintian

Even more hacking on the Lintian static analysis tool for Debian packages, including the following new features:

  • Warn about files referencing /usr/bin/foo if the binary is actually installed under /usr/sbin/foo. (#930702)
  • Support --suppress-tags-from-file in the configuration file. (#930700)

… and the following bug fixes:

Debian LTS

This month I have worked 17 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

Dirk Eddelbuettel: RProtoBuf 0.4.14

30 June, 2019 - 23:48

A new release 0.4.14 of RProtoBuf is arriving at CRAN. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release contains two very helpful pull requests by Jarod Meng that solidify behaviour in two corner cases of message translation. Jeroen also updated the Windows build settings which will help with the upcoming transition to a new Rtools version.

Changes in RProtoBuf version 0.4.14 (2019-06-30)
  • An all.equal.Message method was added to avoid a fallback to the generic (Jarod Meng in #54 fixing #53)

  • Recursive fields now handled by identical() (Jarod Meng in #57 fixing #56)

  • Update Windows build infrastructure (Jeroen)

CRANberries provides the usual diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Ben Hutchings: Debian LTS work, June 2019

30 June, 2019 - 23:30

I was assigned 17 hours of work by Freexian's Debian LTS initiative and worked all those hours this month.

I applied a number of security fixes to Linux 3.16, including those for the TCP denial-of-service vulnerabilities. I uploaded the updated package to jessie and issued DLA-1823.

I backported the corresponding security update for Linux 4.9 from stretch to jessie and issued DLA-1824.

I also prepared and released Linux 3.16.69 with most of the same security fixes, excluding those that weren't yet applied upstream.

Russ Allbery: DocKnot 3.00

30 June, 2019 - 12:32

This package started as only a documentation generator, but my goal for some time has been to gather together all of the tools and random scripts I use to maintain my web site and free software releases. This release does a bunch of internal restructuring to make it easier to add new commands, and then starts that process by adding a docknot dist command. This performs some (although not all) of the actions I currently use my release script for, and provides a platform for ensuring that the full package test suite is run as part of generating a distribution tarball.

This had been half-implemented for quite a while before I finally found the time to finish off a release. Hopefully releases will come a bit faster in the future.

Also in this release are a few tweaks to the DocKnot output (including better support for orphaned packages), and some hopeful fixes for test suite failures on Windows (although I'm not sure how useful this package will be in general on Windows).

You can get the latest version from the DocKnot distribution page or from CPAN.

Keith Packard: snekboard

30 June, 2019 - 07:57
SnekBoard and Lego

I was hoping to use some existing boards for snek+Lego, but I haven't found anything that can control 9V motors. So, I designed SnekBoard.

(click on the picture to watch the demo in motion!)

Here's the code:

# Set motor direction and power from a single value v in the range -1..1
def setservo(v):
    if v < 0: setleft(); v = -v
    else: setright()
    setpower(v)

# Continuously drive a motor to follow an analog sensor
def track(sensor,motor):
    talkto(motor)       # select which motor output subsequent commands affect
    setpower(0)
    setright()
    on()
    while True:
        # read() returns 0..1 for analog inputs; rescale to -1..1
        setservo(read(sensor) * 2 - 1)

track(ANALOG1, MOTOR2)

SnekBoard Hardware

SnekBoard is made from:

  1. SAMD21G18A processor. This is the same chip found in many Arduino boards, including some from Adafruit. It's an ARM Cortex M0 with 256kB of flash and 32kB of RAM.

  2. Lithium Polymer battery. This uses the same connector found on batteries made by SparkFun and Adafruit. There's a battery charger on the board powered from USB so it will always be charging when connected to the computer.

  3. 9V boost power supply. Lego motors have run on 9V for many years now. Instead of using 9V worth of batteries, using a boost regulator means the board can run off a single cell LiPo.

  4. Four motor controllers for Lego motors and servos. The current boards use a TI DRV8838, which provides up to 1.5A.

  5. Two NeoPixels

  6. Eight GPIOs with 3.3V and GND available for each one.

  7. One blue LED.

Getting SnekBoard Built

The SnekBoard PCBs arrived from OshPark a few days ago and I got them assembled and running. OshPark now has an associated stencil service, and I took advantage of that to get a stainless stencil along with the boards. The DRV8838 chips have pads small enough that my home-cut stencils don't work reliably, so having a 'real' stencil really helps. I ordered a 4mil stencil, which was probably too thick. They offer 3mil, and I think that would have reduced some of the bridging I got from having too much paste on the board.

Flashing a Bootloader on SnekBoard

I forked the Adafruit UF2 boot loader and added definitions for this board. The version of GCC provided in Debian appears to generate larger code than the newest upstream version, so I wasn't able to add the NeoPixel support, but the boot loader is happy enough to use the blue LED to indicate status.

STLink V2 vs SAMD21

I've got an STLink V2 SWD dongle which I use on all of my Arm boards for debugging. It appears that this device has a limitation in how it can access memory on the target; it can either use 8-bit or 32-bit accesses, but not 16-bit. That's usually just fine, but there's one register in the flash memory controller on the SAMD21 which requires atomic 16-bit accesses.

The STLinkV2 driver for OpenOCD emulates 16-bit accesses using two 8-bit accesses, causing all flash operations to fail. Fixing this was pretty simple: the 2 bytes following the relevant register aren't used, so I switched the 16-bit access to a 32-bit access. That solved the problem and I was able to flash the bootloader. I've submitted an OpenOCD patch including this upstream and pushed the OpenOCD fork to github.

Snek on the SnekBoard

Snek already supports the target processor; all that was needed for this port was to describe the GPIOs and configure the clocks. This port is on the master branch of the snek repository.

All of the hardware appears to work correctly, except that I haven't tested the 16MHz crystal which I plan to use for a more precise time source.

SnekBoard and Lego Motors

You can see a nice description of pretty much every motor Lego has ever made on Philo's web site. I've got a small selection of them, including:

  1. Electric Technic Mini-Motor 9v (71427)
  2. Power Functions Medium motor (8883)
  3. Power Functions Large motor (88003)
  4. Power Functions XL motor (8882)
  5. Power Functions Servo Motor 88004

In testing, all of them except the Power Functions Medium motor work great. That motor refused to start and just sat on the bench whinging (at about 1kHz). Reading through the DRV8838 docs, I discovered that if the motor consumes more than about 2A for more than 1µs, the chip will turn off the output, wait 1ms and try again.

So I hooked the board up to my oscilloscope and took a look and here's what I saw:

The upper trace is the 9V rail, which looks solid. The lower trace is the motor control output. At 500µs/div, you can see that it's cycling every 1ms, just like the chip docs say it will do in over current situations.

I zoomed in to the very start of one of the cycles and saw this:

This one is scaled to 500ns/div, and you can see that the power is high for a bit more than 1µs, and then goes a bit wild before turning off.

So the Medium motor draws so much current at startup that the DRV8838 turns it off, waits 1ms and tries again. Hence the 1kHz whine heard from the motor.

I tried to measure the current going into the motor with my DVM, but when I did that, just the tiny additional resistance from the DVM caused the motor to start working (!).

Swapping out the Motor Controller

I spent a bunch of time looking for a replacement motor controller; the SnekBoard is a bit special as I want a motor controller that takes direction and PWM instead of PWM1/PWM2, which is what you usually find on an H-bridge set. The PWM1/PWM2 mode is both simpler and more flexible as it allows both brake and coast modes, but it requires two PWM outputs from the SoC for each controller. I found the DRV8876, which provides 3.5A of current instead of 1.5A. That "should" be plenty for even the Medium motor.

Future Plans

I'll get new boards made and loaded to make sure the updated motor controller works. After that, I'll probably build half a dozen or so in time for class this October. I'm wondering if other people would like some of these boards, and if so, how I should go about making them available. Suggestions welcome!

Dirk Eddelbuettel: rvw 0.6.0: First release

29 June, 2019 - 22:20
rvw 0.6.0: First release

Note: Crossposted by Ivan, James and myself.

Today Dirk Eddelbuettel, James Balamuta and Ivan Pavlov are happy to announce the first release of a reworked R interface to the Vowpal Wabbit machine learning system.

Started as a GSoC 2018 project, the new rvw package was built to give R users easier access to a variety of efficient machine learning algorithms. Key features that promote this idea and differentiate the new rvw from existing Vowpal Wabbit packages in R are:

  • A reworked interface that simplifies model manipulations (direct usage of CLI arguments is also available)
  • Support of the majority of Vowpal Wabbit learning algorithms and reductions
  • Extended data.frame converter covering different variations of Vowpal Wabbit input formats

Below is a simple example of how to use the renewed rvw’s interface:

library(rvw)
library(mlbench)   # for a dataset

# Basic data preparation
data("BreastCancer", package = "mlbench")
data_full <- BreastCancer
ind_train <- sample(1:nrow(data_full), 0.8*nrow(data_full))
data_full <- data_full[,-1]
data_full$Class <- ifelse(data_full$Class == "malignant", 1, -1)
data_train <- data_full[ind_train,]
data_test <- data_full[-ind_train,]

# Simple Vowpal Wabbit model for binary classification
vwmodel <- vwsetup(dir = "./",
                   model = "mdl.vw",
                   option = "binary")

# Training
vwtrain(vwmodel = vwmodel,
        data = data_train,
        passes = 10,
        targets = "Class")

# And testing
vw_output <- vwtest(vwmodel = vwmodel, data = data_test)

More information is available in the Introduction and Examples sections of the wiki.

The rvw package links directly to libvw, and so initially we offer a Docker container in order to ship the most up-to-date package with everything needed.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russell Coker: Long-term Device Use

29 June, 2019 - 18:37

It seems to me that Android phones have recently passed the stage where hardware advances are well ahead of software bloat. This is the point that desktop PCs passed about 15 years ago and laptops passed about 8 years ago. For just over 15 years I’ve been avoiding buying desktop PCs; the hardware that organisations I work for throw out is good enough that I don’t need to. For the last 8 years I’ve been avoiding buying new laptops, instead buying refurbished or second hand ones which are more than adequate for my needs. Now it seems that Android phones have reached the same stage of development.

3 years ago I purchased my last phone, a Nexus 6P [1]. Then 18 months ago I got a Huawei Mate 9 as a warranty replacement [2] (I had swapped phones with my wife so the phone I was using which broke was less than a year old). The Nexus 6P had been working quite well for me until it stopped booting, but I was happy to have something a little newer and faster to replace it at no extra cost.

Prior to the Nexus 6P I had a Samsung Galaxy Note 3 for 1 year 9 months which was a personal record for owning a phone and not wanting to replace it. I was quite happy with the Note 3 until the day I fell on top of it and cracked the screen (it would have been ok if I had just dropped it). While the Note 3 still has my personal record for continuous phone use, the Nexus 6P/Huawei Mate 9 have the record for going without paying for a new phone.

A few days ago when browsing the Kogan web site I saw a refurbished Mate 10 Pro on sale for about $380. That’s not much money (I usually have spent $500+ on each phone) and while the Mate 9 is still going strong the Mate 10 is a little faster and has more RAM. The extra RAM is important to me as I have problems with Android killing apps when I don’t want it to. Also the IP67 protection will be a handy feature. So that phone should be delivered to me soon.

Some phones are getting ridiculously expensive nowadays (who wants to walk around with a $1000+ Pixel?) but it seems that the slightly lower end models are more than adequate and the older versions are still good.

Cost Summary

If I can buy a refurbished or old model phone every 2 years for under $400 that will make using a phone cost about $0.50 per day. The Nexus 6P cost me $704 in June 2016 which means that for the past 3 years my phone cost was about $0.62 per day.

It seems that laptops tend to last me about 4 years [3], and I don’t need high-end models (I even used one from a rubbish pile for a while). The last laptops I bought cost me $289 for a Thinkpad X1 Carbon [4] and $306 for the Thinkpad T420 [5]. That makes laptops about $0.20 per day.

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet for $579. That is still working very well for me today; apart from only having 32G of internal storage space and an OS update preventing Android apps from writing to the micro SD card (so I have to use USB to copy TV shows on to it), there’s nothing more that I need from a tablet. Strangely I even get good battery life out of it; I can use it for a couple of hours without the battery running out. Battery life isn’t nearly as good as when it was new, but it’s still OK for my needs. As Samsung stopped providing security updates I can’t use the tablet as an SSH client, but now that my primary laptop is a small and light model that’s less of an issue. Currently that tablet has cost me just over $0.30 per day and it’s still working well.

Currently it seems that my hardware expense for the foreseeable future is likely to be about $1 per day: 20 cents for laptop, 30 cents for tablet, and 50 cents for phone. Adding the $20 per month pre-paid plan with Aldi Mobile that I’m on, the overall expense is about $1.66 per day.

Saving Money

A laptop is very important to me; the amounts of money that I’m spending don’t reflect that. But it seems that I don’t have any option for spending more on a laptop (the Thinkpad X1 Carbon I have now is just great and there’s no real option for getting more utility by spending more). I also don’t have any option to spend less on a tablet; 5 years is a great lifetime for a device that is practically impossible to repair (repair will cost a significant portion of the replacement cost).

I hope that the Mate 10 can last at least 2 years which will make it a new record for low cost of ownership of a phone for me. If app vendors can refrain from making their bloated software take 50% more RAM in the next 2 years that should be achievable.

The surprising thing I learned while writing this post is that my mobile phone expense is the largest of all my expenses related to mobile computing. Given that I want to get good reception in remote areas (needs to be Telstra or another company that uses their network) and that I need at least 3GB of data transfer per month it doesn’t seem that I have any options for reducing that cost.

Related posts:

  1. A Long Term Review of Android Devices Xperia X10 My first Android device was The Sony Ericsson...
  2. Huawei Mate9 Warranty Etc I recently got a Huawei Mate 9 phone....
  3. Android Device Service Life In recent years Android devices have been the most expensive...

Gunnar Wolf: Updates from Raspberrypi-land

29 June, 2019 - 12:06

Yay!

I was feeling sad and depressed because it's already late June... And I had not had enough time to get the unofficial Debian Buster Raspberry preview images booting on the entry-level models of the family (Broadcom 2835-based Raspberries 1A, 1B, 0 and 0W). But, this morning I found a very interesting pull request open in our GitHub repository!

Dispatched some piled-up work, and set an image build. Some minutes later, I had a shiny image, raspi0w.tar.gz. Quickly fired up dd to prepare an SD card. Searched for my RPi0w under too many papers until I found it. Connected to my trusty little monitor, and...

So, as a spoiler for my DebConf talk... Yes! We have (apparent, maybe still a bit incomplete) true Debian-plus-the-boot-blob, straight-Buster support for all of the Raspberries sold until last month (yeah, the RPi4 is probably not yet supported — the kernel does not yet have a Device Tree for it. But it should be fixed soon, hopefully!)

Bits from Debian: Diversity and inclusion in Debian: small actions and large impacts

29 June, 2019 - 05:40

The Debian Project always has and always will welcome contributions from people who are willing to work on a constructive level with each other, without discrimination.

The Diversity Statement and the Code of Conduct are genuinely important parts of our community, and over recent years some other things have been done to make it clear that they aren't just words.

One of those things is the creation of the Debian Diversity Team: it was announced in April 2019, although it had already been working for several months before as a welcoming space for, and a way of increasing visibility of, underrepresented groups within the Debian project.

During DebConf19 in Curitiba there will be a dedicated Diversity and Welcoming Team, consisting of people from the Debian community who will offer a contact point when you feel lost or uneasy. The DebConf team is also in contact with a local LGBTIQA+ support group for exchange of safety concerns and information with respect to Brazil in general.

Today Debian also recognizes the impact LGBTIQA+ people have had in the world and within the Debian project, joining the worldwide Pride celebrations. We show it by changing our logo for this time to the Debian Diversity logo, and encourage all Debian members and contributors to show their support of a diverse and inclusive community.

Daniel Kahn Gillmor: Community Impact of OpenPGP Certificate Flooding

29 June, 2019 - 02:00
Community Impact of OpenPGP Certificate Flooding

I wrote yesterday about a recent OpenPGP certificate flooding attack, what I think it means for the ecosystem, and how it impacted me. This is a brief followup, trying to zoom out a bit and think about why it affected me emotionally the way that it did.

One of the reasons this situation makes me sad is not just that it's more breakage that needs cleaning up, or even that my personal identity certificate was on the receiving end. It's that it has impacted (and will continue impacting at least in the short term) many different people -- friends and colleagues -- who I know and care about. It's not just that they may be the next targets of such a flooding attack if we don't fix things, although that's certainly possible. What gets me is that they were affected because they know me and communicate with me. They had my certificate in their keyring, or in some mutually-maintained system, and as a result of what we know to be good practice -- regular keyring refresh -- they got burned.

Of course, they didn't get actually, physically burned. But from several conversations i've had over the last 24 hours, i know personally at least a half-dozen different people who have lost hours of work, being stymied by the failing tools, some of that time spent confused and anxious and frustrated. Some of them thought they might have lost access to their encrypted e-mail messages entirely. Others were struggling to wrestle a suddenly non-responsive machine back into order. These are all good people doing other interesting work that I want to succeed, and I can't give them those hours back, or relieve them of that stress retroactively.

One of the points I've been driving at for years is that the goals of much of the work I care about (confidentiality; privacy; information security and data sovereignty; healthy communications systems) are not individual goods. They are interdependent, communally-constructed and communally-defended social properties.

As an engineering community, we failed -- and as an engineer, I contributed to that failure -- at protecting these folks in this instance, because we left things sloppy and broken and supposedly "good enough".

Fortunately, this failure isn't the worst situation. There's no arbitrary code execution, no permanent data loss (unless people get panicked and delete everything), no accidental broadcast of secrets that shouldn't be leaked.

And as much as this is a community failure, there are also communities of people who have recognized these problems and have been working to solve them. So I'm pretty happy that good people have been working on infrastructure that saw this coming, and were preparing for it, even if their tools haven't been as fully implemented (or as widely adopted) as they should be yet. Those projects include:

  • Autocrypt -- which avoids any interaction with the keyserver network in favor of in-band key discovery.

  • Web Key Directory or WKD, which maps e-mail addresses to a user-controlled publication space for their OpenPGP Keys.

  • DANE OPENPGPKEY which lets a domain owner publish their user's minimal OpenPGP certificates in the DNS directly.

  • Hagrid, the implementation behind the keys.openpgp.org keyserver, which presents the opportunity for an updates-only interface as well as a place for people to publish their certificates if their domain controller doesn't support WKD or DANE OPENPGPKEY. Hagrid is also an excellent first public showing for the Sequoia project, a Rust-based implementation of the OpenPGP standards that hopefully we can build more tooling on top of in the years to come.

Let's keep pushing these community-driven approaches forward and get the ecosystem to a healthier place.

Mike Gabriel: List Open Files for a Running Application/Service

28 June, 2019 - 14:03

This is merely a little reminder to myself:

for pid in `ps -C <process-name> -o pid=`; do ls -l "/proc/$pid/fd"; done

On Linux, this returns a list of file handles being held open by all instances of <process-name>.

Daniel Kahn Gillmor: OpenPGP Certificate Flooding

28 June, 2019 - 11:00
OpenPGP Certificate Flooding

My public cryptographic identity has been spammed to the point where it is unusable in standard workflows. This blogpost talks about what happened, what I'm doing about it, and what it means for the broader ecosystem.

If you work with me and you use OpenPGP certificates to do so, the crucial things you should know are:

  • Do not refresh my OpenPGP certificate from the SKS keyserver network.

  • Use a constrained keyserver like keys.openpgp.org if you want to check my certificate for updates like revocation, expiration, or subkey rollover.

  • Use an Autocrypt-capable e-mail client, WKD, or direct download from my server to find my certificate in the first place.

  • If you have already fetched my certificate in the last week, and it is bloated, or your GnuPG instance is horribly slow as a result, you probably want to delete it and then recover it via one of the channels described above.

What Happened?

Some time in the last few weeks, my OpenPGP certificate, 0xC4BC2DDB38CCE96485EBE9C2F20691179038E5C6 was flooded with bogus certifications which were uploaded to the SKS keyserver network.

SKS is known to be vulnerable to this kind of Certificate Flooding, and is difficult to address due to the synchronization mechanism of the SKS pool. (SKS's synchronization assumes that all keyservers have the same set of filters). You can see discussion about this problem from a year ago along with earlier proposals for how to mitigate it. But none of those proposals have quite come to fruition, and people are still reliant on the SKS network.

Previous Instances of Certificate Flooding

We've seen various forms of certificate flooding before, including spam on Werner Koch's key over a year ago, and abuse tools made available years ago under the name "trollwot". There's even a keyserver-backed filesystem proposed as a proof of concept to point out the abuse.

There was even a discussion a few months ago about how the SKS keyserver network is dying.

So none of this is a novel or surprising problem. However, the scale of spam attached to certificates recently appears to be unprecedented. It's not just mine: Robert J. Hansen's certificate has also been spammed into oblivion. The signature spam on Werner's certificate, for example, is "only" about 5K signatures (a total of < 1MiB), whereas the signature spam attached to mine is more like 55K signatures for a total of 17MiB, and rjh's is more than double that.

What Problems does Certificate Flooding Cause?

The fact that my certificate is flooded quite this badly provides an opportunity to see what breaks. I've been filing bug reports and profiling problems over the last day.

GnuPG can't even import my certificate from the keyservers any more in the common case. This also has implications for ensuring that revocations are discovered, or new subkeys rotated, as described in that ticket.

In the situations where it's possible to have imported the large certificate, gpg exhibits severe performance problems for even basic operations over the keyring.

This causes Enigmail to become unusable if it encounters a flooded certificate.

It also causes problems for monkeysphere-authentication if it encounters a flooded certificate.

There are probably more! If you find other problems for tools that deal with these sort of flooded certs, please report bugs appropriately.

Dealing with Certificate Flooding

What can we do about this? Months ago, i wrote a draft about abuse-resistant keystores that outlined these problems and what we need from a keyserver.

Use Abuse-Resistant Keystores to Refresh Certificates

If the purpose of refreshing your certificate is to find key material updates and revocations, we need to use an abuse-resistant keyserver or network of keyservers for that.

Fortunately, keys.openpgp.org is just such a service, and it was recently launched. It seems to work! It can distribute revocations and subkey rollovers automatically, even if you don't have a user ID for the certificate. You can do this by putting the following line in ~/.gnupg/dirmngr.conf

keyserver hkps://keys.openpgp.org

and ensure that there is no keyserver entry at all in ~/.gnupg/gpg.conf.

This keyserver doesn't distribute third-party certifications at all, though. And if the owner of the e-mail address hasn't confirmed with the operators of keys.openpgp.org that they want that keyserver to distribute their certificate, it won't even distribute the certificate's user IDs.

Fix GnuPG to Import certificate updates even without User IDs

Unfortunately, GnuPG doesn't cope well with importing minimalist certificates. I've applied patches for this in debian experimental (and they're documented in debian as #930665), but those fixes are not yet adopted upstream, or widely deployed elsewhere.

In-band Certificate Discovery

Refreshing certificates is only part of the role that keyserver networks play. Another is just finding OpenPGP certificates in the first place.

The best way to find a certificate is if someone just gives it to you in a context where it makes sense.

The Autocrypt project is an example of this pattern for e-mail messages. If you can adopt an Autocrypt-capable e-mail client, you should, since that will avoid needing to search for keys at all when dealing with e-mail. Unfortunately, those implementations are also not widely available yet.

Certificate Lookup via WKD or DANE

If you're looking up an OpenPGP certificate by e-mail address, you should try looking it up via some mechanism where the address owner (or at least the domain owner) can publish the record. The current best examples of this are WKD and DANE's OPENPGPKEY DNS records. Modern versions of GnuPG support both of these methods. See the auto-key-locate documentation in gpg(1).
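As a rough illustration (check gpg(1) for the exact mechanisms your version supports), the corresponding line in ~/.gnupg/gpg.conf might look something like:

# try Web Key Directory first, then DANE OPENPGPKEY, then the local keyring
auto-key-locate wkd,dane,local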

Conclusion

This is a mess, and it's a mess a long time coming. The parts of the OpenPGP ecosystem that rely on the naive assumptions of the SKS keyserver can no longer be relied on, because people are deliberately abusing those keyservers. We need significantly more defensive programming, and a better set of protocols for thinking about how and when to retrieve OpenPGP certificates.

A Personal Postscript

I've spent a significant amount of time over the years trying to push the ecosystem into a more responsible posture with respect to OpenPGP certificates, and have clearly not been as successful at it or as fast as I wanted to be. Complex ecosystems can take time to move.

To have my own certificate directly spammed in this way felt surprisingly personal, as though someone was trying to attack or punish me, specifically. I can't know whether that's actually the case, of course, nor do i really want to. And the fact that Robert J. Hansen's certificate was also spammed makes me feel a little less like a singular or unique target, but I also don't feel particularly proud of feeling relieved that someone else is also being "punished" in addition to me.

But this report wouldn't be complete if I didn't mention that I've felt disheartened and demotivated by this situation. I'm a stubborn person, and I'm trying to make the best of the situation by being constructive about at least documenting the places that are most severely broken by this. But I've also found myself tempted to walk away from this ecosystem entirely because of this incident. I don't want to be too dramatic about this, but whoever did this basically experimented on me (and Robert) directly, and it's a pretty shitty thing to do.

If you're reading this, and you set this off, and you selected me specifically because of my role in the OpenPGP ecosystem, or because I wrote the abuse-resistant-keystore draft, or because I'm part of the Autocrypt project, then you should know that I care about making this stuff work for people. If you'd reached out to me to describe what you were planning to do, we could have done all of the above bug reporting and triage using demonstration certificates, and worked on it together. I would have happily helped. I still might! But because of the way this was done, I'm not feeling particularly happy right now. I hope that someone is, somewhere.

Creative Commons License: The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.