Planet Debian

Planet Debian - http://planet.debian.org/

Jonathan McDowell: Rational thoughts on the GitHub ToS change

3 March, 2017 - 01:13

I woke this morning to Thorsten claiming the new GitHub Terms of Service could require the removal of Free software projects from it. This was followed by joeyh removing everything from github. I hadn’t actually been paying attention, so I went looking for some sort of summary of whether I should be worried and ended up reading the actual ToS instead. TL;DR version: No, I’m not worried and I don’t think you should be either.

First, a disclaimer. I’m not a lawyer. I have some legal training, but none of what I’m about to say is legal advice. If you’re really worried about the changes then you should engage the services of a professional.

The gist of the concerns around GitHub’s changes are that they potentially circumvent any license you have applied to your code, either converting GPL licensed software to BSD style (and thus permitting redistribution of binary forms without source) or making it illegal to host software under certain Free software licenses on GitHub due to being unable to meet the requirements of those licenses as a result of GitHub’s ToS.

My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service. There are sadly too many people who upload code there without a license, meaning that technically no one can do anything with it. Don’t do this people; make sure that any project you put on GitHub has some sort of license attached to it (don’t write your own - it’s highly likely one of Apache/BSD/GPL will suit your needs) so people know whether they can make use of it or not. “I don’t care” is not a valid reason not to do this.

Section D, relating to user generated content, is the one causing the problems. It’s possibly easiest to walk through each subsection in order.

D1 says GitHub don’t take any responsibility for your content; you make it, you’re responsible for it, they’re not accepting any blame for harm your content does nor for anything any member of the public might do with content you’ve put on GitHub. This seems uncontentious.

D2 reaffirms your ownership of any content you create, and requires you to only post 3rd party content to GitHub that you have appropriate rights to. So I can’t, for example, upload a copy of ‘Friday’ by Rebecca Black.

Thorsten has some problems with D3, where GitHub reserve the right to remove content that violates their terms or policies. He argues this could cause issues with licenses that require unmodified source code. This seems to be alarmist, and also applies to any random software mirror. The intent of such licenses is in general to ensure that the pristine source code is clearly separate from 3rd party modifications. Removal of content that infringes GitHub’s T&Cs is not going to cause an issue.

D4 is a license grant to GitHub, and I think forms part of joeyh’s problems with the changes. It affirms the content belongs to the user, but grants rights to GitHub to store and display the content, as well as make copies such as necessary to provide the GitHub service. They explicitly state that no right is granted to sell the content at all or to distribute the content outside of providing the GitHub service.

This term would seem to be the minimum necessary for GitHub to ensure they are allowed to provide code uploaded to them for download, and provide their web interface. If you’ve actually put a Free license on your code then this isn’t necessary, but from GitHub’s point of view I can understand wanting to make it explicit that they need these rights to be granted. I don’t believe it provides a method of subverting the licensing intent of Free software authors.

D5 provides more concern to Thorsten. It seems he believes that the ability to fork code on GitHub provides a mechanism to circumvent copyleft licenses. I don’t agree. The second paragraph of this subsection limits the license granted to the user to be the ability to reproduce the content on GitHub - it does not grant them additional rights to reproduce outside of GitHub. These rights, to my eye, enable the forking and viewing of content within GitHub but say nothing about my rights to check code out and ignore the author’s upstream license.

D6 clarifies that if you submit content to a GitHub repo that features a license you are licensing your contribution under these terms, assuming you have no other agreement in place. This looks to be something that benefits projects on GitHub receiving contributions from users there; it’s an explicit statement that such contributions are under the project license.

D7 confirms the retention of moral rights by the content owner, but states they are waived purely for the purposes of enabling GitHub to provide service, as stated under D4. In particular this right is revocable so in the event they do something you don’t like you can instantly remove all of their rights. Thorsten is more worried about the ability to remove attribution and thus breach CC-BY or some BSD licenses, but GitHub’s whole model is providing attribution for changesets and tracking such changes over time, so it’s hard to understand exactly where the service falls down on ensuring the provenance of content is clear.

There are reasons to be wary of GitHub (they’ve taken a decentralised revision control system and made a business model around being a centralised implementation of it, and they store additional metadata such as PRs that aren’t as easily extracted), but I don’t see any indication that the most recent changes to their Terms of Service are something to worry about. The intent is clearly to provide GitHub with the legal basis they need to provide their service, rather than to provide a means for them to subvert the license intent of any Free software uploaded.

Antoine Beaupré: A short history of password hashers

2 March, 2017 - 21:45

These are notes from my research that led to the publication of the password hashers article. This article is more technical than the previous ones and compares the various cryptographic primitives and algorithms used in the software I have reviewed. The criteria for inclusion on this list are fairly vague: I mostly included a password hasher if it was significantly different from previous implementations in some way, and I have also included all the major ones I could find.

The first password hashers

Nic Wolff claims to be the first to have written such a program, all the way back in 2003. Back then the hashing algorithm was MD5; Wolff has since updated the algorithm to use SHA-1 and still maintains his web page for public use. Another ancient but unrelated implementation is the Stanford University Applied Cryptography group's pwdhash software. That implementation was published in 2004 and, unfortunately, was never updated: it still uses MD5 as a hashing algorithm, but at least it uses HMAC to generate tokens, which makes the use of rainbow tables impractical. These implementations are the simplest password hashers: the inputs are simply the site URL and a password. So the algorithms are, basically, for Wolff's:

token = base64(SHA1(password + domain))

And for Stanford's PwdHash:

token = base64(HMAC(MD5, password, domain))
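
As an illustration, here is a minimal Python sketch of those two formulas (the helper names are my own, not code from either project; the real tools also trim and encode the output differently):

import base64, hashlib, hmac

def wolff_token(password, domain):
    # token = base64(SHA1(password + domain))
    return base64.b64encode(hashlib.sha1((password + domain).encode()).digest()).decode()

def pwdhash_token(password, domain):
    # token = base64(HMAC(MD5, password, domain)), keyed with the password
    return base64.b64encode(hmac.new(password.encode(), domain.encode(), hashlib.md5).digest()).decode()

print(wolff_token("hunter2", "example.com"))
print(pwdhash_token("hunter2", "example.com"))
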
SuperGenPass

Another unrelated implementation that is still around is SuperGenPass, a bookmarklet created around 2007. It originally used MD5 as well; it now supports SHA-512, although the output is still limited to 24 characters like MD5 (which needlessly limits the entropy of the resulting password), and it still defaults to MD5 with too few rounds (10, when key derivation recommendations are more generally around 10,000, so that it is slower to bruteforce).

Note that Chris Zarate, the SuperGenPass author, actually credits Nic Wolff as the inspiration for his implementation. SuperGenPass is still in active development and is available for the browser (as a bookmarklet) or mobile (as a web page). SuperGenPass allows you to modify the password length, but also to add an extra profile secret, which is mixed into the password and generates a personalized identicon, presumably to prevent phishing; this also introduces an interesting protection, the profile-specific secret, otherwise only found later in Password Hasher Plus. So the SuperGenPass algorithm looks something like this:

token = base64(SHA512(password + profileSecret + ":" + domain, rounds))
The Wijjo Password Hasher

Another popular implementation is the Wijjo Password Hasher, created around 2006. It was probably the first shipped as a browser extension, which greatly improved the security of the product as users didn't have to continually download the software on the fly. Wijjo's algorithm also improved on the above algorithms, as it uses HMAC-SHA1 instead of plain SHA-1 or HMAC-MD5, which makes it harder to recover the plaintext. Password Hasher allows you to set different password policies (use digits, punctuation, mixed case, special characters and password length) and saves the site names it uses for future reference. It also happens that the Wijjo Password Hasher, in turn, took its inspiration from a different project, hashapass.com, created in 2006 and also based on HMAC-SHA-1. Indeed, hashapass "can easily be generated on almost any modern Unix-like system using the following command line pattern":

echo -n parameter \
| openssl dgst -sha1 -binary -hmac password \
| openssl enc -base64 \
| cut -c 1-8

So the algorithm here is obviously:

token = base64(HMAC(SHA1, password, domain + ":" + counter))[:8]

... although in the case of Password Hasher, there is a special routine that takes the token and inserts random characters in locations determined by the sum of the values of the characters in the token.
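
For reference, the openssl pipeline above translates into a few lines of Python (a sketch only; hashapass_token is my own hypothetical name, and Password Hasher's final character-insertion routine is not reproduced here):

import base64, hashlib, hmac

def hashapass_token(password, parameter):
    # token = base64(HMAC(SHA1, password, parameter))[:8]
    digest = hmac.new(password.encode(), parameter.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()[:8]

# same inputs as the shell example above
print(hashapass_token("password", "parameter"))
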

Password Hasher Plus

Years later, in 2010, Eric Woodruff ported the Wijjo Password Hasher to Chrome and called it Password Hasher Plus. Like the original Password Hasher, the "plus" version also keeps those settings in the extension and uses HMAC-SHA-1 to generate the password, as it is designed to be backwards-compatible with the Wijjo Password Hasher. Woodruff did add one interesting feature: a profile-specific secret key that gets mixed in to create the security token, like what SuperGenPass does now. Stealing the master password is therefore not enough to generate tokens anymore. This solves one security concern with Password Hasher: a hostile page could watch your keystrokes, steal your master password and use it to derive passwords on other sites. Having a profile-specific secret key that is not accessible to the site's JavaScript works around that issue, but typing the master password directly in the password field, while convenient, is just a bad idea, period. The final algorithm looks something like:

token = base64(HMAC(SHA1, password, base64(HMAC(SHA1, profileSecret, domain + ":" + counter))))

Honestly, that seems rather strange, but it's what I read from the source code, which is available only after decompressing the extension nowadays. I would have expected the simplest version:

token = base64(HMAC(SHA1, HMAC(SHA1, profileSecret, password), domain + ":" + counter))

The idea here would be to "hide" the master password from bruteforce attacks as soon as possible... but maybe this is all equivalent.

Regardless, Password Hasher Plus then takes the token and applies the same special character insertion routine as the Password Hasher.

LessPass

Last year, Guillaume Vincent, a French self-described "humanist and scuba diving fan", released the LessPass extension for Chrome, Firefox and Android. LessPass introduces several interesting features. It is probably the first to include a command-line version. It also uses a more robust key derivation algorithm (PBKDF2) and takes into account the username on the site, allowing multi-account support. The original release (version 1) used only 8192 rounds, which is now considered too low. In the bug report it was interesting to note that LessPass couldn't follow the usual practice of running the key derivation for 1 second to determine the number of rounds needed, as the results need to be deterministic.

At first glance, the LessPass source code seems clear and easy to read, which is always a good sign, but of course the devil is in the details. One key feature from Password Hasher Plus that is missing here is the profile-specific seed, although it should be impossible, as far as I know, for a hostile web page to steal keystrokes from a browser extension.

The algorithm then gets a little more interesting:

entropy = PBKDF2(SHA256, masterPassword, domain + username + counter, rounds, length)
where
    rounds=10000
    length=32

entropy is then used to pick characters to match the chosen profile.
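
A rough Python sketch of that derivation step (the character-picking stage is omitted, and the exact counter encoding is an assumption on my part, not LessPass's actual code):

import hashlib

def lesspass_entropy(master_password, domain, username, counter=1,
                     rounds=10000, length=32):
    # salt = domain + username + counter, per the formula above
    salt = (domain + username + format(counter, "x")).encode()
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt,
                               rounds, dklen=length)

print(lesspass_entropy("correct horse", "example.com", "alice").hex())
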

Regarding code readability, I quickly got confused by the PBKDF2 implementation: SubtleCrypto.ImportKey() doesn't seem to support PBKDF2 in the API, yet that is how it is used there... Is it just something to extract key material? We see later what looks like a more standard AES-based PBKDF2 implementation, but this code just looks strange to me. It could be my unfamiliarity with newer JavaScript coding patterns, however.

There is also a LessPass-specific character-picking routine that is not base64, and is different from the original Password Hasher algorithm.

Master Password

A review of password hashers would hardly be complete without mentioning the Master Password and its elaborate algorithm. While the applications surrounding the project are not as refined (there is no web browser plugin and the web interface can't be easily turned into a bookmarklet), the algorithm has been well developed. Of all the password managers reviewed here, Master Password uses one of the strongest key derivation algorithms out there, scrypt:

key = scrypt(password, salt, cost, size, parallelization, length)
where
    salt = "com.lyndir.masterpassword" + len(username) + name
    cost = 32768
    size = 8
    parallelization = 2
    length = 64

entropy = hmac-sha256(key, "com.lyndir.masterpassword" + len(domain) + domain + counter)
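
Rendered literally in Python, the two steps look roughly like this (illustration only: the real implementation encodes the lengths and the counter as fixed-width integers, so this sketch will not reproduce actual Master Password keys):

import hashlib, hmac

def master_key(password, name):
    # salt follows the formula above, with len() as a plain decimal string (an assumption)
    salt = ("com.lyndir.masterpassword" + str(len(name)) + name).encode()
    # N=32768, r=8 needs about 32 MiB, above OpenSSL's default cap, so raise maxmem
    return hashlib.scrypt(password.encode(), salt=salt, n=32768, r=8, p=2,
                          dklen=64, maxmem=128 * 1024 * 1024)

def site_entropy(key, domain, counter=1):
    msg = ("com.lyndir.masterpassword" + str(len(domain)) + domain + str(counter)).encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

print(site_entropy(master_key("correct horse", "Alice"), "example.com").hex())
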

Master Password then uses one of 6 sets of "templates" specially crafted to be "easy for a user to read from a screen and type using a keyboard or smartphone" and "compatible with most site's password policies", our "transferable" criterion defined in the first passwords article. For example, the default template mixes vowels, consonants, numbers and symbols, but carefully avoids visually similar characters like O and 0 or i and 1 (although it does mix 1 and l, oddly enough).

The main strength of Master Password seems to be the clear definition of its algorithm (although hashapass.com does give out OpenSSL command-line examples...), which led to its reuse in another application called freepass. The Master Password app also doubles as a stateful password manager...

Other implementations

I have also considered including easypasswords, which uses PBKDF2-HMAC-SHA1, in my list of recommendations. I discovered only recently that its author wrote a detailed review of many more password hashers, scoring them according to their relative strength. In the end, I ended up covering LessPass more, since the design is very similar and LessPass does seem a bit more usable. Covering LessPass also allowed me to show the contrast and issues regarding algorithm changes, for example.

It is also interesting to note that the EasyPasswords author has criticized the Master Password algorithm quite severely:

[...] scrypt isn’t being applied correctly. The initial scrypt hash calculation only depends on the username and master password. The resulting key is combined with the site name via SHA-256 hashing then. This means that a website only needs to break the SHA-256 hashing and deduce the intermediate key — as long as the username doesn’t change this key can be used to generate passwords for other websites. This makes breaking scrypt unnecessary[...]

During a discussion with the Master Password author, he outlined that "there is nothing "easy" about brute-force deriving a 64-byte key through a SHA-256 algorithm." SHA-256 is used in the last stage because it is "extremely fast". scrypt is used as a key derivation algorithm to generate a large secret and is "intentionally slow": "we don't want it to be easy to reverse the master password from a site password". "But it's unnecessary for the second phase because the input to the second phase is so large. A master password is tiny, there are only a few thousand or million possibilities to try. A master key is 8^64, the search space is huge. Reversing that doesn't need to be made slower. And it's nice for the password generation to be fast after the key has been prepared in-memory so we can display site passwords easily on a mobile app instead of having to lock the UI a few seconds for every password."

Finally, I considered covering Blum's Mental Hash (also covered here and elsewhere). This consists of an algorithm that can basically be run by the human brain directly. It's not for the faint of heart, however: if I understand it correctly, it requires remembering a password that is basically a string of 26 digits, plus computing modular arithmetic on the outputs. Needless to say, most people don't do modular arithmetic every day...

Guido Günther: Debian Fun in February 2017

2 March, 2017 - 17:15
Debian LTS

February marked the 22nd month I contributed to Debian LTS under the Freexian umbrella. I had 8 hours allocated which I used by:

  • the 2nd half of a LTS frontdesk week
  • an update to lts-cve-triage.py so we don't ignore undetermined issues anymore
  • testing the bind9 update prepared by Thorsten Alteholz
  • testing of apache2 packages prepared by Jonas Meurer and Antoine Beaupré
  • triaging of QEMU CVEs and fixing most of them, resulting in DLA-842-1
Other Debian stuff
  • libvirt and gtk-vnc uploads to fix CVEs in unstable and stretch
  • A git-buildpackage upload to unstable to unbreak importing large histories with import-dsc
  • Some CSS improvements for git-buildpackage to (hopefully) make the manual easier to read.
Some other Free Software activities

Nothing exciting, just some minor fixes in several places.

Martín Ferrari: Prometheus in Jessie(bpo) and Stretch

2 March, 2017 - 14:31

Just over 2 years ago, I started packaging Prometheus for Debian. It was a daunting task, mainly because the Golang ecosystem is so young, almost none of its dependencies were packaged, and upstream practices are very different from Debian's.

Today I want to announce that this is complete.

The main part of the Prometheus stack is available in Debian testing, which very soon will become the Stretch release:

These are available for a bunch of architectures (sadly, not all of them), and are the most important building blocks to deploy Prometheus in your network.

I have also finished preparing backports for all the required dependencies, and jessie-backports has a complete Prometheus stack too!

Adding to these, the archive already has the client libraries for Go, Python, and Django; and a bunch of other exporters. Except for the Postgres exporter, most of these are going to be in the Stretch release, and I plan to prepare backports for Jessie too:

Note that not all of these have been packaged by me, luckily other Debianites are also working on Prometheus packaging!

I am confident that Prometheus is going to become one of the main monitoring tools in the near future, and I am very happy that Debian is the first distribution to offer official packages for it. If you are interested, there is still lots of work ahead. Patches, bug reports, and co-maintainers are welcome!


Hideki Yamane: Debian docker image is smaller than Oracle Linux 7

2 March, 2017 - 12:03
From Oracle Developers Blog,
"We've just introduced a new base Oracle Linux 7-slim Docker image that's a minuscule 114MB. Ok, so it's not quite as small as Alpine, but it's now the smallest base image of any of the major distributions. Check out the numbers in the graph to see just how small."

It's not fair, Oracle. You talked about a -slim image for Oracle Linux, so you should do it for other distros, too.

$ sudo docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
debian              jessie-slim         232f5cd0c765        2 days ago          80 MB
debian              jessie              978d85d02b87        2 days ago          123 MB
oraclelinux         7-slim              f005b5220b05        8 days ago          114 MB

Debian's jessie-slim image is 80MB, smaller than oraclelinux 7-slim image.

And we're going to Debian 9 "Stretch":

debian              stretch-slim        02ee50628785        2 days ago          57.1 MB
debian              stretch             6f4d77d39d73        2 days ago          100 MB

It's smaller than Oracle's -slim image even by default, and the -slim variant is half the size. Nice, isn't it? :)

Petter Reinholdtsen: Unlimited randomness with the ChaosKey?

2 March, 2017 - 02:50

A few days ago I ordered a small batch of the ChaosKey, a small USB dongle for generating entropy created by Bdale Garbee and Keith Packard. Yesterday it arrived, and I am very happy to report that it works great! According to its designers, to get it to work out of the box you need Linux kernel version 4.1 or later. I tested on a Debian Stretch machine (kernel version 4.9), and there it worked just fine, increasing the available entropy very quickly. I wrote a small one-liner to test it. It first prints the current entropy level, drains /dev/random, and then prints the entropy level once a second for five seconds. Here is the situation without the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
300
0+1 oppføringer inn
0+1 oppføringer ut
28 byte kopiert, 0,000264565 s, 106 kB/s
4
8
12
17
21
%

The entropy level increases by 3-4 every second. In such a case, any application requiring random bits (like an HTTPS-enabled web server) will halt and wait for more entropy. And here is the situation with the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
1079
0+1 oppføringer inn
0+1 oppføringer ut
104 byte kopiert, 0,000487647 s, 213 kB/s
433
1028
1031
1035
1038
%

Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)

Neil McGovern: GNOME ED update – Week 9

2 March, 2017 - 01:38

As mentioned in my previous post, I’ll be posting regularly with an update on what I’ve been up to as the GNOME Executive Director, and highlighting some cool stuff around the project!

If you find this dull, they're tagged with [update-post] so hopefully you can filter them out. And dear planet.debian.org folk – if this annoys you too much, I can change the feed category to turn this off if it's not interesting enough :) However, if you like these or have any suggestions for things you'd like to see here, let me know.

Conferences

One of the areas we’ve been working on is the sponsorship brochure for GUADEC and GNOME.Asia. Big thanks to Allan Day and the Engagement team for helping out here – and I’m pleased to say it’s almost finished! In the meantime, if you or your company are interested in sponsoring us, please drop a mail to sponsors@guadec.org!

Press

A fairly lengthy and wide-ranging interview with myself has been published at cio.com. It covers a bit of my background (although mistakenly says I worked for Collabora Productivity, rather than Collabora Limited!), and looks at a few different areas on where I see GNOME and how it sits within the greater GNU/Linux movement – I cover “some uncomfortable subjects around desktop Linux”. It’s well worth a read.

Release update

The GNOME 3.24 release is happening soon! As such, the release team announced the string freeze. If you want to help out with how much has been translated into your language, then https://wiki.gnome.org/TranslationProject/JoiningTranslation is a good place to start. I’d like to give a shout out to the translation teams in particular too. Sometimes people don’t realise how much work goes into this, and it’s fantastic that we’re able to reach so many more users with our software.

Google Summer of Code

GNOME is now announced as a mentoring organisation for Google Summer of Code! There are some great ideas for Summer (Well, in the Northern hemisphere anyway) projects, so if you want to spend your time coding on Free Software, and get paid for it, why not sign up as a student?

Brett Parker: Using the Mythic Beasts IPv4 -> IPv6 Proxy for Websites on a v6 only Pi and getting the right REMOTE_ADDR

2 March, 2017 - 01:35

So, more because I was intrigued than anything else, I've got a Pi 3 from Mythic Beasts; they're supplied with IPv6-only connectivity and the file storage is NFS over a private v4 network. The proxy will happily redirect requests over either http or https to the Pi, but this results (without turning on the Proxy Protocol) in your logs showing the remote addresses of the proxy servers, which is not entirely useful.

So, first step first, we get our RPi and we make sure that we can log in to it via ssh (I'm nearly always on a v6 connection anyways, so this was a simple case of sshing to the v6 address of the Pi). I then installed haproxy and apache2 on the Pi and went about configuring them. With apache2 I changed it to listen on localhost only, on ports 8080 and 4443; I hadn't at this point enabled the ssl module so, really, the change for 4443 didn't kick in. Here's my /etc/apache2/ports.conf file:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen [::1]:8080

<IfModule ssl_module>
       Listen [::1]:4443
</IfModule>

<IfModule mod_gnutls.c>
       Listen [::1]:4443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I then edited /etc/apache2/sites-available/000-default.conf to change the VirtualHost line to [::1]:8080.

So, with that in place, now we deploy haproxy in front of it; the basic /etc/haproxy/haproxy.cfg config is:

global
       log /dev/log    local0
       log /dev/log    local1 notice
       chroot /var/lib/haproxy
       stats socket /run/haproxy/admin.sock mode 660 level admin
       stats timeout 30s
       user haproxy
       group haproxy
       daemon

       # Default SSL material locations
       ca-base /etc/ssl/certs
       crt-base /etc/ssl/private

       # Default ciphers to use on SSL-enabled listening sockets.
       # For more information, see ciphers(1SSL). This list is from:
       #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
       ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
       ssl-default-bind-options no-sslv3

defaults
       log     global
       mode    http
       option  httplog
       option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
       errorfile 400 /etc/haproxy/errors/400.http
       errorfile 403 /etc/haproxy/errors/403.http
       errorfile 408 /etc/haproxy/errors/408.http
       errorfile 500 /etc/haproxy/errors/500.http
       errorfile 502 /etc/haproxy/errors/502.http
       errorfile 503 /etc/haproxy/errors/503.http
       errorfile 504 /etc/haproxy/errors/504.http

frontend any_http
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::80
        default_backend any_http

backend any_http
        server apache2 ::1:8080

Obviously after that you then do:

systemctl restart apache2
systemctl restart haproxy

Now you have a proxy-protocol'd setup from the proxy servers, and you can still talk directly to the Pi over IPv6. You're not yet logging the right remote IPs, but we're a step closer. Next, enable mod_remoteip in apache2:

a2enmod remoteip

And add a file, /etc/apache2/conf-available/remoteip-logformats.conf containing:

LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" remoteip_vhost_combined

And edit the /etc/apache2/sites-available/000-default.conf to change the CustomLog line to use remoteip_vhost_combined rather than combined as the LogFormat:

CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined

Now, enable the config and restart apache2:

a2enconf remoteip-logformats
systemctl restart apache2

Now you'll get the right remote ip in the logs (cool, huh!), and, better still, the environment that gets pushed through to cgi scripts/php/whatever is now also correct.

So, you can now happily visit http://<your-pi-name>.hostedpi.com/, e.g. http://srwpi.hostedpi.com/.

Next up, you'll want something like dehydrated - I grabbed the packaged version from Debian's jessie-backports repository - so that you can make yourself some nice shiny SSL certificates (why wouldn't you, after all!). Once you've got dehydrated installed, you'll probably want to tweak it a bit; I have some magic extra files that I use. I also suggest getting the dehydrated-apache2 package, which just makes it all much easier too.

/etc/dehydrated/conf.d/mail.sh:

CONTACT_EMAIL="my@email.address"

/etc/dehydrated/conf.d/domainconfig.sh:

DOMAINS_D="/etc/dehydrated/domains.d"

/etc/dehydrated/domains.d/srwpi.hostedpi.com:

HOOK="/etc/dehydrated/hooks/srwpi"

/etc/dehydrated/hooks/srwpi:

#!/bin/sh
action="$1"
domain="$2"

case $action in
  deploy_cert)
    privkey="$3"
    cert="$4"
    fullchain="$5"
    chain="$6"
    cat "$privkey" "$fullchain" > /etc/ssl/private/srwpi.pem
    chmod 640 /etc/ssl/private/srwpi.pem
    ;;
  *)
    ;;
esac

/etc/dehydrated/hooks/srwpi has the execute bit set (chmod +x /etc/dehydrated/hooks/srwpi), and is really only there so that the certificate can be used easily in haproxy.

And finally the file /etc/dehydrated/domains.txt:

srwpi.hostedpi.com

Obviously, use your own pi name in there, or better yet, one of your own domain names that you've mapped to the proxies.

Run dehydrated in cron mode (it's noisy, but meh...):

dehydrated -c

That should then have generated you some shiny certificates (hopefully). For now, I'll just tell you how to do it through the /etc/apache2/sites-available/default-ssl.conf file: edit that file and change the SSLCertificateFile and SSLCertificateKeyFile to point to the /var/lib/dehydrated/certs/srwpi.hostedpi.com/fullchain.pem and /var/lib/dehydrated/certs/srwpi.hostedpi.com/privkey.pem files, do the edit for the CustomLog as you did for the other default site, change the VirtualHost to be [::1]:4443, and enable the site:

a2ensite default-ssl
a2enmod ssl

And restart apache2:

systemctl restart apache2

Now time to add some bits to haproxy.cfg, usefully this is only a tiny tiny bit of extra config:

frontend any_https
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::443 ssl crt /etc/ssl/private/srwpi.pem

        default_backend any_https

backend any_https
        server apache2 ::1:4443 ssl ca-file /etc/ssl/certs/ca-certificates.crt

Restart haproxy:

systemctl restart haproxy

And we're all done! REMOTE_ADDR will appear as the correct remote address in the logs, and in the environment.

Joey Hess: removing everything from github

2 March, 2017 - 01:24

Github recently drafted an update to their Terms Of Service. The new TOS is potentially very bad for copylefted Free Software. It potentially neuters it entirely, so GPL licensed software hosted on Github has an implicit BSD-like license. I'll leave the full analysis to the lawyers, but see Thorsten's analysis.

I contacted Github about this weeks ago, and received only an anodyne response. The Free Software Foundation was also talking with them about it. It seems that Github doesn't care or has some reason to want to effectively neuter copyleft software licenses.

I am deleting my repositories from Github at this time. If you used the Github mirrors for git-annex, propellor, ikiwiki, etckeeper, myrepos, click on the links for the non-Github repos (git.joeyh.name also has mirrors). (There's an oncoming severe weather event here, so it may take some time before I get everything deleted and cleaned up.)

[Some commits to git-annex were pushed to Github this morning by an automated system, but I had NOT accepted their new TOS at that point, and explicitly do NOT give Github or anyone any rights to git-annex not granted by its GPL and AGPL licenses.]

See also: PDF of Github TOS that can be read without being forced to first accept Github's TOS

Brett Parker: Ooooooh! Shiny!

1 March, 2017 - 22:12

Yay! So, it's a year and a bit on from the last post (eeep!), and we get the news of the Psion Gemini - I wants one, that looks nice and shiny and just the right size to not be inconvenient to lug around all the time, and far better for ssh usage than the onscreen keyboard on my phone!

Junichi Uekawa: I was wondering why my Termux is so slow.

1 March, 2017 - 14:32
I was wondering why my Termux is so slow. ARM might be slow but not this slow. Maybe I have too high expectations on interactivity of a local development environment.

Paul Wise: FLOSS Activities February 2017

1 March, 2017 - 09:37
Changes Issues Review Administration
  • Debian: do the samhain dance, ask for new local contacts at one site, ask local admins to reset one machine, powercycle 2 dead machines, redirect 1 user to the support channels, redirect 1 user to a service admin, redirect 1 spam reporter to the right mechanisms, investigate mail logs for a missing bug report, ping bugs-search.d.o service admin about moving off glinka and remove data, poke cdimage-search.d.o service admin about moving off glinka, update a cron job on denis.d.o for the rename of letsencrypt.sh to dehydrated, debug planet.d.o issue and remove stray cron job lock file, check if ftp is used on a couple of security.d.o mirrors, discuss storage upgrade for LeaseWeb for snapshot.d.o/deriv.d.n/etc, investigate SSD SMART error and ignore the unknown attribute, ask 9 users to restart their processes, investigate apt-get update failure in nagios, swapoff/swapon a swap file to drain it, restart/disable some failed services, help restore the backup server, debug stretch /dev/log issue,
  • Debian QA: deploy merged PTS/tracker patches,
  • Debian wiki: answer 1 IP-blocked VPN user, pinged 1 user on IRC about their bouncing mail, disabled 4 accounts due to bouncing mail, redirect 1 person to documentation/lists, whitelist 5 email addresses, forward 1 password reset token, killed 1 spammer account, reverted 1 spammer edit,
  • Debian mentors: security upgrades, check which email a user signed up with
  • Openmoko: security upgrades, daemon restarts, reboot
Debian derivatives
  • Turned off the census cron job because it ran out of disk space
  • Update Armbian sources.list
  • Ping siduction folks about updating their sources.list
  • Start a discussion about DebConf17
  • Notify the derivatives based on jessie or older that stretch is frozen
  • Invite Rebellin Linux (again)
Sponsors

The libesedb Debian backport was sponsored by my employer. All other work was done on a volunteer basis.

Thorsten Glaser: New GitHub Terms of Service r̲e̲q̲u̲i̲r̲e̲ removing many Open Source works from it

1 March, 2017 - 07:00

The new Terms of Service of GitHub became effective today, which is quite problematic — there was a review phase, but the problems were not answered, and, while the language is somewhat changed from the draft, they became effective immediately.

Now, the new ToS are not so bad that one immediately must stop using their service for disagreement, but it’s important that certain content may no longer legally be pushed to GitHub. I’ll try to explain which is affected, and why.

I’m mostly working my way backwards through section D, as that’s where the problems I identified lie, and because this is from easier to harder.

Note that using a private repository does not help, as the same terms apply.

Anything requiring attribution (e.g. CC-BY, but also BSD, …)

Section D.7 requires the person uploading content to waive any and all attribution rights. Ostensibly “to allow basic functions like search to work”, which I can even believe, but, for a work the uploader did not create completely by themselves, they can’t grant this licence.

The CC licences are notably bad because they don’t permit sublicencing, but even so, anything requiring attribution can, in almost all cases, not be “written or otherwise, created or uploaded by our Users”. This is a fact, and the exceptions are few.

Anything putting conditions on the right to “use, display and perform” the work and, worse, “reproduce” (all Copyleft)

Section D.5 requires the uploader to grant all other GitHub users…

  • the right to “use, display and perform” the work (with no further restrictions attached to it) — while this (likely — I didn’t check) does not exclude the GPL, many others (I believe CC-*-SA) are affected, and…
  • the right to “reproduce your Content solely on GitHub as permitted through GitHub's functionality”, with no further restrictions attached; this is a killer for, I believe, any and all licences falling into the “copyleft” category.

Note that section D.4 is similar, but granting the licence to GitHub (and their successors); while this is worded much more friendly than in the draft, this fact only makes it harder to see if it affects works in a similar way. But that doesn’t matter since D.5 is clear enough.

This means that any and all content under copyleft licences is also no longer welcome on GitHub.

Anything requiring integrity of the author’s source (e.g. LPPL)

Some licences are famous for requiring people to keep the original intact while permitting patches to be piled on top; this is actually permissible for Open Source, even though annoying, and the most common LaTeX licence is rather close to that. Section D.3 says any (partial) content can be removed — though keeping a PKZIP archive of the original is a likely workaround.

But what if I just fork something under such a licence?

Only “continuing to use GitHub” constitutes accepting the new terms. This means that repositories from people who last used GitHub before March 2017 are excluded.

Even then, the new terms likely only apply to content uploaded in March 2017 or later (note that git commit dates are unreliable; you have to actually check whether the contribution dates from March 2017 or later).

And then, most people are likely unaware of the new terms. If they upload content for which they themselves don’t have the appropriate rights (waivers of attribution and of copyleft/share-alike clauses), it’s plainly illegal and also makes your upload of it, or a derivative thereof, no more legal.

Granted, people who, in full knowledge of the new ToS, share any “User-Generated Content” with GitHub on or after 1ˢᵗ March, 2017, and actually have the appropriate rights to do that, can do that; and if you encounter such a repository, you can fork, modify and upload that iff you also waive attribution and copyleft/share-alike rights for your portion of the upload. But — especially in the beginning — these will be few and far between (even more so taking into account that GitHub is, legally spoken, a mess, and they don’t even care about hosting only OSS / Free works).

Conclusion (Fazit)

I’ll be starting to remove any such content of mine, such as the source code mirrors of jupp, which is under the GNU GPLv1, now and will be requesting people who forked such repositories on GitHub to also remove them. This is not something I like to do but something I am required to do in order to comply with the licence granted to me by my upstream. Anything you’ve found contributed by me in the meantime is up for review; ping me if I forgot something. (mksh is likely safe, even if I hereby remind you that the attribution requirement of the BSD-style licences still applies outside of GitHub.)

(Pet peeve: why can’t I “adopt a licence” with British spelling? They seem to require oversea barbarian spelling.)

The others

Atlassian Bitbucket has similar terms (even worse actually; I looked at them to see whether I could mirror mksh there, and turns out, I can’t if I don’t want to lose most of what few rights I retain when publishing under a permissive licence). Gitlab seems to not have such, but requires you to indemnify them… YMMV. I think I’ll self-host the removed content.

Chris Lamb: Free software activities in February 2017

1 March, 2017 - 05:09

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Submitted a number of pull requests to the Django web development framework:
    • Add a --mode=unified option to the "diffsettings" management command. (#8113)
    • Fix a crash in setup_test_environment() if ALLOWED_HOSTS is a tuple. (#8101)
    • Use Python 3 "shebangs" now that the master branch is Python 3 only. (#8105)
    • URL namespacing warning should consider nested namespaces. (#8102)
  • Created an experimental patch against the Python interpreter in order to find reproducibility-related assumptions in dict handling in arbitrary Python code. (#29431)
  • Filed two issues against dh-virtualenv, a tool to package Python virtualenv environments in Debian packages:
    • Fix "upgrage-pip" typo in usage documentation. (#195)
    • Missing DH_UPGRADE_SETUPTOOLS equivalent for dh_virtualenv (#196)
  • Fixed a large number of spelling corrections in Samba, a free-software re-implementation of the Windows networking protocols.
  • Reviewed and merged a pull request by @jheld for django-slack (my library to easily post messages to the Slack group-messaging utility) to support per-message backends and channels. (#63)
  • Created a pull request for django-two-factor-auth, a complete Two-Factor Authentication (2FA) framework for projects using the Django web development framework to drop use of the @lazy_property decorator to ensure compatibility with Django 1.11. (#195)
  • Filed, triaged and eventually merged a change from @evgeni to fix an autopkgtest-related issue in travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every code change. (#41)
  • Submitted a pull request against social-core — a library to allow Python applications to authenticate against third-party web services such as Facebook, Twitter, etc. — to use the more-readable X if Y else Z construction over Y and X or Z. (#44)
  • Filed an issue against freezegun (a tool to make it easier to write Python tests involving times) to report that dateutils was missing from requirements.txt. (#173)
  • Submitted a pull request against the Hypothesis "QuickCheck"-like testing framework to make the build reproducible. (#440)
  • Fixed an issue reported by @davidak in trydiffoscope (a web-based version of the diffoscope in-depth and content-aware diff utility) where the maximum upload size was incorrectly calculated. (#22)
  • Created a pull request for the Mars Simulation Project to remove some embedded timestamps from the changelog.gz and mars-sim.1.gz files in order to make the build reproducible. (#24)
  • Filed a bug against the cpio archiving utility to report that the testsuite fails when run in the UTC +1300 timezone. (Thread)
  • Submitted a pull request against the "pnmixer" system-tray volume mixer in order to make the build reproducible. (#153)
  • Sent a patch to Testfixtures (a collection of helpers and mock objects that are useful when writing Python unit tests or doctests) to make the build reproducible. (#56)
  • Created a pull request for the "Cloud" Sphinx documentation theme in order to make the output reproducible. (#22)
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

(I have been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.)

This month I:

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features:
    • Add a machine-readable JSON output format. (Closes: #850791).
    • Add an --exclude option. (Closes: #854783).
    • Show results from debugging packages last. (Closes: #820427).
    • Extract archive members using an auto-incrementing integer avoiding the need to sanitise filenames. (Closes: #854723).
    • Apply --max-report-size to --text output. (Closes: #851147).
    • Specify <html lang="en"> in the HTML output. (re. #849411).
  • Bug fixes:
    • Fix errors when comparing directories with non-directories. (Closes: #835641).
    • Device and RPM fallback comparisons require xxd. (Closes: #854593).
    • Fix tests that call xxd on Debian Jessie due to change of output format. (Closes: #855239).
    • Add missing Recommends for comparators. (Closes: #854655).
    • Importing submodules (ie. parent.child) will attempt to import parent. (Closes: #854670).
    • Correct logic of module_exists ensuring we correctly skip the debian.deb822 tests when python3-debian is not installed. (Closes: #854745).
    • Clean all temporary files in the signal handler thread instead of attempting to pass the exception back to the main thread. (Closes: #852013).
    • Fix behaviour of setting report maximums to zero (ie. no limit).
  • Optimisations:
    • Don't uselessly run xxd(1) on non-directories.
    • No need to track libarchive directory locations.
    • Optimise create_limited_print_func.
  • Tests:
    • When comparing two empty directories, ensure that the mtime of the directory is consistent to avoid non-deterministic failures.
    • Ensure we can at least import the "deb_fallback" and "rpm_fallback" modules.
    • Add test for symlink differing in destination.
    • Add tests for --progress, --status-fd and profiling output options as well as the Deb{Changes,Buildinfo,Dsc} and RPM fallback comparisons.
    • Add get_data and @skip_unless_module_exists test helpers.
    • Mark impossible-to-reach code to improve test coverage.

buildinfo.debian.net

buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Drop raw_text fields now as we've moved these to Amazon S3.
  • Drop storage of Installed-Build-Depends and subsequently-orphaned Binary package instances to recover diskspace.

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Print log entry when fixing a file. (Closes: #777239).
  • Run our entire testsuite in autopkgtests, not just the first test. (Closes: #852517).
  • Don't test for stat(2)'s blksize and block attributes. (Closes: #854937).
  • Use error() from Dh_Lib.pm over "manual" die().


Debian Patches contributed Debian LTS

This month I have been paid to work 13 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 817-1 for libphp-phpmailer, correcting a local file disclosure vulnerability where insufficient parsing of HTML messages could potentially be used by an attacker to read a local file.
  • Issued DLA 826-1 for wireshark, fixing a denial of service vulnerability where a malformed NATO Ground Moving Target Indicator Format ("STANAG 4607") capture file could cause memory exhaustion or an infinite loop.
Uploads
  • python-django (1:1.11~beta1-1) — New upstream beta release.
  • redis (3:3.2.8-1) — New upstream release.
  • gunicorn (19.6.0-11) — Use ${misc:Pre-Depends} to populate Pre-Depends for dpkg-maintscript-helper.
  • dh-virtualenv (1.0-1~bpo8+1) — Upload to jessie-backports.

I sponsored the following uploads:

I also performed the following QA uploads:

  • dh-kpatches (0.99.36+nmu4) — Make kernel patch builds reproducible.

Finally, I made the following non-maintainer uploads:

  • cpio (2.12+dfsg-3) — Remove rmt.8.gz to prevent a piuparts error.
  • dot-forward (1:0.71-2.2) — Correct a FTBFS; we don't install anything to /usr/sbin, so use GNU Make's $(wildcard ..) over the shell's own * expansion.
Debian bugs filed

I also filed 15 FTBFS bugs against binaryornot, chaussette, examl, ftpcopy, golang-codegangsta-cli, hiro, jarisplayer, libchado-perl, python-irc, python-stopit, python-stopit, python-stopit, python-websockets, rubocop & yash.

FTP Team

As a Debian FTP assistant I ACCEPTed 116 packages: autobahn-cpp, automat, bglibs, bitlbee, bmusb, bullet, case, certspotter, checkit-tiff, dash-el, dash-functional-el, debian-reference, el-x, elisp-bug-hunter, emacs-git-messenger, emacs-which-key, examl, genwqe-user, giac, golang-github-cloudflare-cfssl, golang-github-docker-goamz, golang-github-docker-libnetwork, golang-github-go-openapi-spec, golang-github-google-certificate-transparency, golang-github-karlseguin-ccache, golang-github-karlseguin-expect, golang-github-nebulouslabs-bolt, gpiozero, gsequencer, jel, libconfig-mvp-slicer-perl, libcrush, libdist-zilla-config-slicer-perl, libdist-zilla-role-pluginbundle-pluginremover-perl, libevent, libfunction-parameters-perl, libopenshot, libpod-weaver-section-generatesection-perl, libpodofo, libprelude, libprotocol-http2-perl, libscout, libsmali-1-java, libtest-abortable-perl, linux, linux-grsec, linux-signed, lockdown, lrslib, lua-curses, lua-torch-cutorch, mariadb-10.1, mini-buildd, mkchromecast, mocker-el, node-arr-exclude, node-brorand, node-buffer-xor, node-caller, node-duplexer3, node-ieee754, node-is-finite, node-lowercase-keys, node-minimalistic-assert, node-os-browserify, node-p-finally, node-parse-ms, node-plur, node-prepend-http, node-safe-buffer, node-text-table, node-time-zone, node-tty-browserify, node-widest-line, npd6, openoverlayrouter, pandoc-citeproc-preamble, pydenticon, pyicloud, pyroute2, pytest-qt, pytest-xvfb, python-biomaj3, python-canonicaljson, python-cgcloud, python-gffutils, python-h5netcdf, python-imageio, python-kaptan, python-libtmux, python-pybedtools, python-pyflow, python-scrapy, python-scrapy-djangoitem, python-signedjson, python-unpaddedbase64, python-xarray, qcumber, r-cran-urltools, radiant, repo, rmlint, ruby-googleauth, ruby-os, shutilwhich, sia, six, slimit, sphinx-celery, subuser, swarmkit, tmuxp, tpm2-tools, vine, wala & x265.

I additionally filed 8 RC bugs against packages that had incomplete debian/copyright files against: checkit-tiff, dash-el, dash-functional-el, libcrush, libopenshot, mkchromecast, pytest-qt & x265.

Reproducible builds folks: Reproducible Builds: week 96 in Stretch cycle

1 March, 2017 - 03:25

Here's what happened in the Reproducible Builds effort between Sunday February 19 and Saturday February 25 2017:

Reproducible work in other projects
Upcoming Events

Introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, March 5th.

On March 23rd Holger Levsen will give a talk at the German Unix User Group's "Frühjahrsfachgespräch" about Reproducible Builds everywhere.

Verifying Software Freedom with Reproducible Builds will be presented by Vagrant Cascadian at Libreplanet2017 in Boston, March 25th-26th.

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Reviews of unreproducible packages

9 package reviews have been added, 3 have been updated and 1 has been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris Lamb (4)
diffoscope development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • diffoscope 77 was unblocked by the release team for stretch.
  • Mattia Rizzolo uploaded 77~bpo8+1 to jessie-backports.
buildinfo.debian.net development

buildinfo.debian.net is our experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

Website development
tests.reproducible-builds.org
  • Ed Maste made the upcoming FreeBSD release almost 100% reproducible (see above).
  • Holger Levsen added the number of configured and running builder jobs to the performance stats page.
  • Holger Levsen improved the scheduler, so that untested packages and versions are tried sooner.
  • Holger added logging for submitting .buildinfo files to buildinfo.debian.net and added notifications about failures.
  • Holger also made some minor improvements to the generated HTML.
Misc.

This week's edition was written by Chris Lamb, Ed Maste & Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Kees Cook: security things in Linux v4.10

28 February, 2017 - 13:31

Previously: v4.9.

Here’s a quick summary of some of the interesting security things in last week’s v4.10 release of the Linux kernel:

PAN emulation on arm64

Catalin Marinas introduced ARM64_SW_TTBR0_PAN, which is functionally the arm64 equivalent of arm’s CONFIG_CPU_SW_DOMAIN_PAN. While Privileged eXecute Never (PXN) has been available in ARM hardware for a while now, Privileged Access Never (PAN) will only be available in hardware once vendors start manufacturing ARMv8.1 or later CPUs. Right now, everything is still ARMv8.0, which left a bit of a gap in security flaw mitigations on ARM since CONFIG_CPU_SW_DOMAIN_PAN can only provide PAN coverage on ARMv7 systems, but nothing existed on ARMv8.0. This solves that problem and closes a common exploitation method for arm64 systems.

thread_info relocation on arm64

As done earlier for x86, Mark Rutland has moved thread_info off the kernel stack on arm64. With thread_info no longer on the stack, it’s more difficult for attackers to find it, which makes it harder to subvert the very sensitive addr_limit field.

linked list hardening
I added CONFIG_BUG_ON_DATA_CORRUPTION to restore the original CONFIG_DEBUG_LIST behavior that existed prior to v2.6.27 (9 years ago): if list metadata corruption is detected, the kernel refuses to perform the operation, rather than just WARNing and continuing with the corrupted operation anyway. Since linked list corruption (usually via heap overflows) is a common method for attackers to gain a write-what-where primitive, it’s important to stop the list add/del operation if the metadata is obviously corrupted.

seeding kernel RNG from UEFI

A problem for many architectures is finding a viable source of early boot entropy to initialize the kernel Random Number Generator. For x86, this is mainly solved with the RDRAND instruction. On ARM, however, the solutions continue to be very vendor-specific. As it turns out, UEFI is supposed to hide various vendor-specific things behind a common set of APIs. The EFI_RNG_PROTOCOL call is designed to provide entropy, but it can’t be called when the kernel is running. To get entropy into the kernel, Ard Biesheuvel created a UEFI config table (LINUX_EFI_RANDOM_SEED_TABLE_GUID) that is populated during the UEFI boot stub and fed into the kernel entropy pool during early boot.

arm64 W^X detection

As done earlier for x86, Laura Abbott implemented CONFIG_DEBUG_WX on arm64. Now any dangerous arm64 kernel memory protections will be loudly reported at boot time.

64-bit get_user() zeroing fix on arm
While the fix itself is pretty minor, I like that this bug was found through a combined improvement to the usercopy test code in lib/test_user_copy.c. Hoeun Ryu added zeroing-on-failure testing, and I expanded the get_user()/put_user() tests to include all sizes. Neither improvement alone would have found the ARM bug, but together they uncovered a typo in a corner case.

no-new-privs visible in /proc/$pid/status

This is a tiny change, but I like being able to introspect processes externally. Prior to this, I wasn’t able to trivially answer the question “is that process setting the no-new-privs flag?” To address this, I exposed the flag in /proc/$pid/status, as NoNewPrivs.
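
For example, a process can set the flag on itself with prctl(2) and then read it straight back out of /proc/self/status; a minimal sketch:

    /* Set no_new_privs on ourselves, then read the NoNewPrivs line back
     * from /proc/self/status (the line is present as of v4.10). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_NO_NEW_PRIVS
    #define PR_SET_NO_NEW_PRIVS 38
    #endif

    int main(void)
    {
        char line[256];
        FILE *f;

        if (prctl(PR_SET_NO_NEW_PRIVS, 1UL, 0UL, 0UL, 0UL) != 0) {
            perror("prctl");
            return 1;
        }

        f = fopen("/proc/self/status", "r");
        if (!f) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), f))
            if (strncmp(line, "NoNewPrivs:", 11) == 0)
                fputs(line, stdout);    /* expect "NoNewPrivs:   1" */
        fclose(f);
        return 0;
    }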

That’s all for now! Please let me know if you saw anything else you think needs to be called out. :) I’m already excited about the v4.11 merge window opening…

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Gunnar Wolf: Much belated book presentation, this Saturday

28 February, 2017 - 12:21

Once again, I'm making an announcement mainly for my local circle of friends and (gasp!) followers. For those of you more than 100 km away from Mexico City, please disregard this message.

Back in July 2015, after two years of hard work, my university finished publishing my second book: a textbook for the subject I teach in Computer Engineering, Operating Systems Fundamentals.

The book has been, from its inception, fully available online under a permissive (CC-BY) license. One of the book's intended contributions is to present a text natively written in Spanish. Besides, our goal (I coordinated a team of authors, working with two colleagues from Rosario, Argentina, and one from Cauca, Colombia) was to provide a book students can easily and legally share.

I have received many good reviews so far, and after teaching from it for four years (both while working on it and after its publication), I can attest the material is light enough to fit a Bachelor's-level course, while being deep enough to make our students sweat healthily ;-)

Anyway: I have been scheduled to present the book at my university's main book fair, the 38 Feria Internacional del Libro del Palacio de Minería, this Saturday, 2017.03.04 at 16:00 in the Salón Manuel Tolsá. What's even better: this time, I won't be preparing a speech! The book will be presented by two very good friends of mine, José María Serralde and Rolando Cedillo. Both of them are clever, witty, fun, and a real honor to work with. Of course, having them present our book is more than a double honor.

So, everybody who can make it: FIL Minería is always great and fun. Come share the love! Come have a book! Or, at least, have a good time and a nice chat with us!

Urvika Gola: Outreachy- Week 8 & 9 Progress

28 February, 2017 - 11:46

Working with 9-Patch Images, Adapter Classes, and Layouts in Android.

Before getting this new task, I had never wondered: “How does that bubble around our chat messages wrap around the width of the text we write?”

The images used as the background of our messages are called 9-Patch images.

They stretch themselves according to the text length and font size!

Android will automatically resize the image to accommodate the contents, like this:

(Image source: developer.android.com)

How great it would be if the clothes we wear could also work the same way, fitting according to body size. I could then still wear my cute childhood dresses..

Below are the 9-Patch images I edited. There are two sets of bubble images, one for incoming and one for outgoing SIP messages.

These images have to be designed in a certain way: they should be stored at the smallest possible size, with a 1px border on all sides. The details are clearly explained in the Android documentation:

https://developer.android.com/guide/topics/graphics/2d-graphics.html#nine-patch

Then, save the image with “.9” inserted between the file name and the extension.

For example, if your image is named bubble.png, rename it to bubble.9.png.

They should be stored like any other image file, in the res/drawable folder.

The benefits of using 9-patch images are:

  1. The image proportions are set for different screen sizes automatically.
    You don’t have to create multiple PNGs at different resolutions for multiple screen sizes.
  2. They resize themselves according to the text size set on the user’s phone and according to the text length.

I had to modify the existing Lumicall SIP message screen, which used a simple ListView as the chat message holder, and replace it with the 9-patch bubble images to make it more interactive.


Elizabeth Ferdman: 12 Week Progress Update for PGP Clean Room

28 February, 2017 - 07:00

I worked on creating the whiptail dialogs and the corresponding gpg scripts for four options for primary and/or secondary/subkey generation.

1) A “Quick” Generate Primary and Secondary Key task that only asks the user for the UID and password and creates an rsa4096 primary key, an rsa2048 secondary key and an rsa2048 laptop signing subkey.

2) A “Custom” Generate Primary and Secondary Key task that gives the user more flexibility in algo, usage and expiry, but still adheres to PGP best practices. For the primary key, the user chooses between an rsa4096 key or an ECC curve, sign/cert or cert-only for usage, and the expiry. For the secondary encryption key, the user also chooses between RSA and ECC, but can pick a key length between 2048 and 4096 bits, as well as the expiry.

3) Generate Primary Key Only: Same as the primary key generation for #2

4) Generate a Custom Subkey: The user gets to choose between rsa<2048-4096>, dsa<2048-3072>, elg<2048-4096> and an ECC curve, and then choose the usage and expiry. The tricky part was making sure that the usage matched the algorithm. For example, DSA is only capable of sign and auth, while RSA can do sign, auth, and encrypt. ECC curves are capable of all usages; however, encrypt cannot overlap with sign/auth for any curve, even though the name of the curve is the same. So I used radio buttons and checkboxes to make it as easy as possible for the user.

These options follow the best practices outlined on the riseup and Debian wiki pages, such as:

  • The primary key should use a strong algorithm and should only have the usages cert and/or sign.

  • Subkeys can be 2048-4096 bits, preferably RSA, DSA-2 or ECC.

  • The UID prompt shouldn’t ask for a comment.

  • DSA-1024 is deprecated so I restricted DSA to a minimum of 2048.

Joey Hess: making git-annex secure in the face of SHA1 collisions

28 February, 2017 - 04:15

git-annex has never used SHA1 by default. But, there are concerns about SHA1 collisions being used to exploit git repositories in various ways. Since git-annex builds on top of git, it inherits its foundational SHA1 weaknesses. Or does it?

Interestingly, when I dug into the details, I found a way to make git-annex repositories secure from SHA1 collision attacks, as long as signed commits are used (and verified).

When git commits are signed (and verified), SHA1 collisions in commits are not a problem. And there seems to be no way to generate usefully colliding git tree objects (unless they contain really ugly binary filenames). That leaves blob objects, and when using git-annex, those are git-annex key names, which can be secured from being a vector for SHA1 collision attacks.

This needed some work on git-annex, which is now done, so look for a release in the next day or two that hardens it against SHA1 collision attacks. For details about how to use it, and more about why it avoids git's SHA1 weaknesses, see https://git-annex.branchable.com/tips/using_signed_git_commits/.

My advice is, if you are using a git repository to publish or collaborate on binary files, in which it's easy to hide SHA1 collisions, you should switch to using git-annex and signed commits.

PS: Of course, verifying gpg signatures on signed commits adds some complexity and won't always be done. It turns out that the current SHA1 known-prefix collision attack cannot usefully be used to generate colliding commit objects, although a future common-prefix collision attack might. So, even if users don't verify signed commits, I believe that repositories using git-annex for binary files will be as secure as git repositories containing binary files used to be. However secure that might be..
