Planet Debian
https://planet.debian.org/

Dirk Eddelbuettel: rvw 0.6.0: First release

29 June, 2019 - 22:20

Note: Crossposted by Ivan, James and myself.

Today Dirk Eddelbuettel, James Balamuta and Ivan Pavlov are happy to announce the first release of a reworked R interface to the Vowpal Wabbit machine learning system.

Started as a GSoC 2018 project, the new rvw package was built to give R users easier access to a variety of efficient machine learning algorithms. Key features that promote this idea and differentiate the new rvw from existing Vowpal Wabbit packages in R are:

  • A reworked interface that simplifies model manipulations (direct usage of CLI arguments is also available)
  • Support for the majority of Vowpal Wabbit learning algorithms and reductions
  • Extended data.frame converter covering different variations of Vowpal Wabbit input formats

Below is a simple example of the new rvw interface:

library(rvw)
library(mlbench)   # for a dataset

# Basic data preparation
data("BreastCancer", package = "mlbench")
data_full <- BreastCancer
ind_train <- sample(1:nrow(data_full), 0.8*nrow(data_full))
data_full <- data_full[,-1]
data_full$Class <- ifelse(data_full$Class == "malignant", 1, -1)
data_train <- data_full[ind_train,]
data_test <- data_full[-ind_train,]

# Simple Vowpal Wabbit model for binary classification
vwmodel <- vwsetup(dir = "./",
                   model = "mdl.vw",
                   option = "binary")

# Training
vwtrain(vwmodel = vwmodel,
        data = data_train,
        passes = 10,
        targets = "Class")

# And testing
vw_output <- vwtest(vwmodel = vwmodel, data = data_test)
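
A quick sanity check of the result might look as follows (a sketch only: it assumes vwtest() returns numeric predictions whose sign is comparable to the {-1, 1} labels created above):

# Rough holdout accuracy, assuming vw_output holds numeric predictions
mean(sign(vw_output) == data_test$Class)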

More information is available in the Introduction and Examples sections of the wiki.

The rvw package links directly to libvw, so initially we offer a Docker container in order to ship the most up-to-date package with everything needed.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russell Coker: Long-term Device Use

29 June, 2019 - 18:37

It seems to me that Android phones have recently passed the stage where hardware advances are well ahead of software bloat. This is the point that desktop PCs passed about 15 years ago and laptops passed about 8 years ago. For just over 15 years I’ve been avoiding buying desktop PCs: the hardware that organisations I work for throw out is good enough that I don’t need to. For the last 8 years I’ve been avoiding buying new laptops, instead buying refurbished or second hand ones which are more than adequate for my needs. Now it seems that Android phones have reached the same stage of development.

3 years ago I purchased my last phone, a Nexus 6P [1]. Then 18 months ago I got a Huawei Mate 9 as a warranty replacement [2] (I had swapped phones with my wife so the phone I was using which broke was less than a year old). The Nexus 6P had been working quite well for me until it stopped booting, but I was happy to have something a little newer and faster to replace it at no extra cost.

Prior to the Nexus 6P I had a Samsung Galaxy Note 3 for 1 year 9 months which was a personal record for owning a phone and not wanting to replace it. I was quite happy with the Note 3 until the day I fell on top of it and cracked the screen (it would have been ok if I had just dropped it). While the Note 3 still has my personal record for continuous phone use, the Nexus 6P/Huawei Mate 9 have the record for going without paying for a new phone.

A few days ago when browsing the Kogan web site I saw a refurbished Mate 10 Pro on sale for about $380. That’s not much money (I usually have spent $500+ on each phone) and while the Mate 9 is still going strong the Mate 10 is a little faster and has more RAM. The extra RAM is important to me as I have problems with Android killing apps when I don’t want it to. Also the IP67 protection will be a handy feature. So that phone should be delivered to me soon.

Some phones are getting ridiculously expensive nowadays (who wants to walk around with a $1000+ Pixel?) but it seems that the slightly lower end models are more than adequate and the older versions are still good.

Cost Summary

If I can buy a refurbished or old model phone every 2 years for under $400 that will make using a phone cost about $0.50 per day. The Nexus 6P cost me $704 in June 2016 which means that for the past 3 years my phone cost was about $0.62 per day.

It seems that laptops tend to last me about 4 years [3], and I don’t need high-end models (I even used one from a rubbish pile for a while). The last laptops I bought cost me $289 for a Thinkpad X1 Carbon [4] and $306 for the Thinkpad T420 [5]. That makes laptops about $0.20 per day.

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet for $579. That is still working very well for me today; apart from only having 32G of internal storage space and an OS update preventing Android apps from writing to the micro SD card (so I have to use USB to copy TV shows on to it), there’s nothing more that I need from a tablet. Strangely, I even get good battery life out of it: I can use it for a couple of hours without the battery running out. Battery life isn’t nearly as good as when it was new, but it’s still OK for my needs. As Samsung stopped providing security updates I can’t use the tablet as a SSH client, but now that my primary laptop is a small and light model that’s less of an issue. Currently that tablet has cost me just over $0.30 per day and it’s still working well.

Currently it seems that my hardware expense for the foreseeable future is likely to be about $1 per day: 20 cents for laptop, 30 cents for tablet, and 50 cents for phone. The overall expense is about $1.66 per day, as I’m on a $20 per month pre-paid plan with Aldi Mobile (roughly another 66 cents per day).

Saving Money

A laptop is very important to me, but the amounts of money that I’m spending don’t reflect that. It seems that I don’t have any option for spending more on a laptop (the Thinkpad X1 Carbon I have now is just great and there’s no real option for getting more utility by spending more). I also don’t have any option to spend less on a tablet: 5 years is a great lifetime for a device that is practically impossible to repair (repair would cost a significant portion of the replacement cost).

I hope that the Mate 10 can last at least 2 years, which will make it a new record for low cost of ownership of a phone for me. If app vendors can refrain from making their bloated software take 50% more RAM in the next 2 years, that should be achievable.

The surprising thing I learned while writing this post is that my mobile phone expense is the largest of all my expenses related to mobile computing. Given that I want to get good reception in remote areas (needs to be Telstra or another company that uses their network) and that I need at least 3GB of data transfer per month it doesn’t seem that I have any options for reducing that cost.

Related posts:

  1. A Long Term Review of Android Devices Xperia X10 My first Android device was The Sony Ericsson...
  2. Huawei Mate9 Warranty Etc I recently got a Huawei Mate 9 phone....
  3. Android Device Service Life In recent years Android devices have been the most expensive...

Gunnar Wolf: Updates from Raspberrypi-land

29 June, 2019 - 12:06

Yay!

I was feeling sad and depressed because it's already late June... And I had not had enough time to get the unofficial Debian Buster Raspberry preview images booting on the entry-level models of the family (Broadcom 2835-based Raspberries 1A, 1B, 0 and 0W). But, this morning I found a very interesting pull request open in our GitHub repository!

Dispatched some piled-up work, and set an image build. Some minutes later, I had a shiny image, raspi0w.tar.gz. Quickly fired up dd to prepare an SD card. Searched for my RPi0w under too many papers until I found it. Connected to my trusty little monitor, and...
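
For completeness, preparing the card is the usual raw copy. A sketch only: the unpacked image name and the device path are assumptions here, and dd will happily overwrite the wrong disk if the target is mistyped.

tar xzf raspi0w.tar.gz
sudo dd if=raspi0w.img of=/dev/mmcblk0 bs=4M status=progress && sync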

So, as a spoiler for my DebConf talk... Yes! We have (apparent, maybe still a bit incomplete) true Debian-plus-the-boot-blob, straight-Buster support for the whole Raspberry Pi family, that is, all of the raspberries sold until last month. (Yeah, the RPi4 is probably not yet supported; the kernel does not yet have a Device Tree for it. But it should be fixed soon, hopefully!)

Bits from Debian: Diversity and inclusion in Debian: small actions and large impacts

29 June, 2019 - 05:40

The Debian Project always has welcomed, and always will welcome, contributions from people who are willing to work on a constructive level with each other, without discrimination.

The Diversity Statement and the Code of Conduct are genuinely important parts of our community, and over recent years some other things have been done to make it clear that they aren't just words.

One of those things is the creation of the Debian Diversity Team: it was announced in April 2019, although it had already been working for several months before as a welcoming space for, and a way of increasing visibility of, underrepresented groups within the Debian project.

During DebConf19 in Curitiba there will be a dedicated Diversity and Welcoming Team, consisting of people from the Debian community who will offer a contact point when you feel lost or uneasy. The DebConf team is also in contact with a local LGBTIQA+ support group for exchange of safety concerns and information with respect to Brazil in general.

Today Debian also recognizes the impact LGBTIQA+ people have had in the world and within the Debian project, joining the worldwide Pride celebrations. We show it by temporarily changing our logo to the Debian Diversity logo, and encourage all Debian members and contributors to show their support of a diverse and inclusive community.

Daniel Kahn Gillmor: Community Impact of OpenPGP Certificate Flooding

29 June, 2019 - 02:00

I wrote yesterday about a recent OpenPGP certificate flooding attack, what I think it means for the ecosystem, and how it impacted me. This is a brief followup, trying to zoom out a bit and think about why it affected me emotionally the way that it did.

One of the reasons this situation makes me sad is not just that it's more breakage that needs cleaning up, or even that my personal identity certificate was on the receiving end. It's that it has impacted (and will continue impacting at least in the short term) many different people -- friends and colleagues -- who I know and care about. It's not just that they may be the next targets of such a flooding attack if we don't fix things, although that's certainly possible. What gets me is that they were affected because they know me and communicate with me. They had my certificate in their keyring, or in some mutually-maintained system, and as a result of what we know to be good practice -- regular keyring refresh -- they got burned.

Of course, they didn't get actually, physically burned. But from several conversations I've had over the last 24 hours, I personally know at least a half-dozen different people who have lost hours of work, stymied by the failing tools, some of that time spent confused and anxious and frustrated. Some of them thought they might have lost access to their encrypted e-mail messages entirely. Others were struggling to wrestle a suddenly non-responsive machine back into order. These are all good people doing other interesting work that I want to succeed, and I can't give them those hours back, or relieve them of that stress retroactively.

One of the points I've been driving at for years is that the goals of much of the work I care about (confidentiality; privacy; information security and data sovereignty; healthy communications systems) are not individual goods. They are interdependent, communally-constructed and communally-defended social properties.

As an engineering community, we failed -- and as an engineer, I contributed to that failure -- at protecting these folks in this instance, because we left things sloppy and broken and supposedly "good enough".

Fortunately, this failure isn't the worst situation. There's no arbitrary code execution, no permanent data loss (unless people get panicked and delete everything), no accidental broadcast of secrets that shouldn't be leaked.

And as much as this is a community failure, there are also communities of people who have recognized these problems and have been working to solve them. So I'm pretty happy that good people have been working on infrastructure that saw this coming, and were preparing for it, even if their tools haven't been as fully implemented (or as widely adopted) as they should be yet. Those projects include:

  • Autocrypt -- which avoids any interaction with the keyserver network in favor of in-band key discovery.

  • Web Key Directory or WKD, which maps e-mail addresses to a user-controlled publication space for their OpenPGP Keys.

  • DANE OPENPGPKEY which lets a domain owner publish their user's minimal OpenPGP certificates in the DNS directly.

  • Hagrid, the implementation behind the keys.openpgp.org keyserver, which presents the opportunity for an updates-only interface as well as a place for people to publish their certificates if their domain controller doesn't support WKD or DANE OPENPGPKEY. Hagrid is also an excellent first public showing for the Sequoia project, a Rust-based implementation of the OpenPGP standards that hopefully we can build more tooling on top of in the years to come.

Let's keep pushing these community-driven approaches forward and get the ecosystem to a healthier place.

Mike Gabriel: List Open Files for a Running Application/Service

28 June, 2019 - 14:03

This is merely a little reminder to myself:

for pid in `ps -C <process-name> -o pid=`; do ls -l "/proc/$pid/fd"; done

On Linux, this returns a list of file handles being held open by all instances of <process-name>.
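
If lsof is available, a roughly equivalent view (same information, different formatting) is:

lsof -c <process-name>

Note that -c matches processes whose command name begins with the given string, rather than matching exactly.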

Daniel Kahn Gillmor: OpenPGP Certificate Flooding

28 June, 2019 - 11:00

My public cryptographic identity has been spammed to the point where it is unusable in standard workflows. This blogpost talks about what happened, what I'm doing about it, and what it means for the broader ecosystem.

If you work with me and you use OpenPGP certificates to do so, the crucial things you should know are:

  • Do not refresh my OpenPGP certificate from the SKS keyserver network.

  • Use a constrained keyserver like keys.openpgp.org if you want to check my certificate for updates like revocation, expiration, or subkey rollover.

  • Use an Autocrypt-capable e-mail client, WKD, or direct download from my server to find my certificate in the first place.

  • If you have already fetched my certificate in the last week, and it is bloated, or your GnuPG instance is horribly slow as a result, you probably want to delete it and then recover it via one of the channels described above.

What Happened?

Some time in the last few weeks, my OpenPGP certificate, 0xC4BC2DDB38CCE96485EBE9C2F20691179038E5C6, was flooded with bogus certifications which were uploaded to the SKS keyserver network.

SKS is known to be vulnerable to this kind of Certificate Flooding, and the problem is difficult to address due to the synchronization mechanism of the SKS pool. (SKS's synchronization assumes that all keyservers have the same set of filters.) You can see discussion about this problem from a year ago along with earlier proposals for how to mitigate it. But none of those proposals have quite come to fruition, and people are still reliant on the SKS network.

Previous Instances of Certificate Flooding

We've seen various forms of certificate flooding before, including spam on Werner Koch's key over a year ago, and abuse tools made available years ago under the name "trollwot". There's even a keyserver-backed filesystem proposed as a proof of concept to point out the abuse.

There was even a discussion a few months ago about how the SKS keyserver network is dying.

So none of this is a novel or surprising problem. However, the scale of spam attached to certificates recently appears to be unprecedented. It's not just mine: Robert J. Hansen's certificate has also been spammed into oblivion. The signature spam on Werner's certificate, for example, is "only" about 5K signatures (a total of < 1MiB), whereas the signature spam attached to mine is more like 55K signatures for a total of 17MiB, and rjh's is more than double that.

What Problems does Certificate Flooding Cause?

The fact that my certificate is flooded quite this badly provides an opportunity to see what breaks. I've been filing bug reports and profiling problems over the last day.

GnuPG can't even import my certificate from the keyservers any more in the common case. This also has implications for ensuring that revocations are discovered, or new subkeys rotated, as described in that ticket.

In the situations where it's possible to have imported the large certificate, gpg exhibits severe performance problems for even basic operations over the keyring.

This causes Enigmail to become unusable if it encounters a flooded certificate.

It also causes problems for monkeysphere-authentication if it encounters a flooded certificate.

There are probably more! If you find other problems for tools that deal with these sorts of flooded certs, please report bugs appropriately.

Dealing with Certificate Flooding

What can we do about this? Months ago, I wrote a draft about abuse-resistant keystores that outlined these problems and what we need from a keyserver.

Use Abuse-Resistant Keystores to Refresh Certificates

If the purpose of refreshing your certificate is to find key material updates and revocations, we need to use an abuse-resistant keyserver or network of keyservers for that.

Fortunately, keys.openpgp.org is just such a service, and it was recently launched. It seems to work! It can distribute revocations and subkey rollovers automatically, even if you don't have a user ID for the certificate. You can do this by putting the following line in ~/.gnupg/dirmngr.conf

keyserver hkps://keys.openpgp.org

and ensure that there is no keyserver entry at all in ~/.gnupg/gpg.conf.
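
With that configuration in place, a routine refresh (a standard GnuPG invocation, shown here with my fingerprint) will consult the constrained keyserver rather than the SKS pool:

gpg --refresh-keys C4BC2DDB38CCE96485EBE9C2F20691179038E5C6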

This keyserver doesn't distribute third-party certifications at all, though. And if the owner of the e-mail address hasn't confirmed with the operators of keys.openpgp.org that they want that keyserver to distribute their certificate, it won't even distribute the certificate's user IDs.

Fix GnuPG to Import Certificate Updates Even Without User IDs

Unfortunately, GnuPG doesn't cope well with importing minimalist certificates. I've applied patches for this in debian experimental (and they're documented in debian as #930665), but those fixes are not yet adopted upstream, or widely deployed elsewhere.

In-band Certificate Discovery

Refreshing certificates is only part of the role that keyserver networks play. Another is just finding OpenPGP certificates in the first place.

The best way to find a certificate is if someone just gives it to you in a context where it makes sense.

The Autocrypt project is an example of this pattern for e-mail messages. If you can adopt an Autocrypt-capable e-mail client, you should, since that will avoid needing to search for keys at all when dealing with e-mail. Unfortunately, those implementations are also not widely available yet.

Certificate Lookup via WKD or DANE

If you're looking up an OpenPGP certificate by e-mail address, you should try looking it up via some mechanism where the address owner (or at least the domain owner) can publish the record. The current best examples of this are WKD and DANE's OPENPGPKEY DNS records. Modern versions of GnuPG support both of these methods. See the auto-key-locate documentation in gpg(1).
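
For example, a gpg.conf line like the following (a sketch; check the auto-key-locate documentation in gpg(1) for the mechanisms your version actually supports) tries the local keyring first, then WKD, then DANE:

auto-key-locate local,wkd,dane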

Conclusion

This is a mess, and it's a mess a long time coming. The parts of the OpenPGP ecosystem that rely on the naive assumptions of the SKS keyserver can no longer be relied on, because people are deliberately abusing those keyservers. We need significantly more defensive programming, and a better set of protocols for thinking about how and when to retrieve OpenPGP certificates.

A Personal Postscript

I've spent a significant amount of time over the years trying to push the ecosystem into a more responsible posture with respect to OpenPGP certificates, and have clearly not been as successful at it or as fast as I wanted to be. Complex ecosystems can take time to move.

To have my own certificate directly spammed in this way felt surprisingly personal, as though someone was trying to attack or punish me, specifically. I can't know whether that's actually the case, of course, nor do I really want to. And the fact that Robert J. Hansen's certificate was also spammed makes me feel a little less like a singular or unique target, but I also don't feel particularly proud of feeling relieved that someone else is also being "punished" in addition to me.

But this report wouldn't be complete if I didn't mention that I've felt disheartened and demotivated by this situation. I'm a stubborn person, and I'm trying to make the best of the situation by being constructive about at least documenting the places that are most severely broken by this. But I've also found myself tempted to walk away from this ecosystem entirely because of this incident. I don't want to be too dramatic about this, but whoever did this basically experimented on me (and Robert) directly, and it's a pretty shitty thing to do.

If you're reading this, and you set this off, and you selected me specifically because of my role in the OpenPGP ecosystem, or because I wrote the abuse-resistant-keystore draft, or because I'm part of the Autocrypt project, then you should know that I care about making this stuff work for people. If you'd reached out to me to describe what you were planning to do, we could have done all of the above bug reporting and triage using demonstration certificates, and worked on it together. I would have happily helped. I still might! But because of the way this was done, I'm not feeling particularly happy right now. I hope that someone is, somewhere.

Kees Cook: package hardening asymptote

28 June, 2019 - 05:35

Forever ago I set up tooling to generate graphs representing the adoption of various hardening features in Ubuntu packaging. These were very interesting in 2006 when stack protector was making its way into the package archive. Similarly in 2008 and 2009 as FORTIFY_SOURCE and read-only relocations made their way through the archive. It took a while to really catch hold, but finally PIE-by-default started to take off in 2016 through 2018:

Around 2012 when Debian started in earnest to enable hardening features for their archive, I realized this was going to be a long road. I added the above “20 year view” for Ubuntu and then started similarly graphing hardening features in Debian packages too (the blip on PIE here was a tooling glitch, IIRC):

Today I realized that my Ubuntu tooling broke back in January and no one noticed, including me. And really, why should anyone notice? The “near term” (weekly, monthly) graphs have been basically flat for years:

In the long-term view the measurements have a distinctly asymptotic appearance and the graphs are maybe only good for their historical curves now. But then I wonder, what’s next? What new compiler feature adoption could be measured? I think there are still a few good candidates…

How about enabling -fstack-clash-protection? (It is currently only in GCC; Clang still hasn’t implemented it.)

Or how about getting serious and using forward-edge Control Flow Integrity? (Clang has -fsanitize=cfi for general purpose function prototype based enforcement, and GCC has the more limited -fvtable-verify for C++ objects.)
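
For concreteness, here is a sketch of what enabling these could look like on a single compilation unit (flag spellings as currently documented for GCC and Clang; demo.c is a placeholder, and real package builds would wire flags in via dpkg-buildflags or similar):

# GCC: stack clash protection
gcc -fstack-clash-protection -o demo demo.c

# Clang: forward-edge CFI, which requires LTO and hidden visibility
clang -flto -fvisibility=hidden -fsanitize=cfi -o demo demo.c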

Where is backward-edge CFI? (Is everyone waiting for CET?)

Does anyone see something meaningful that needs adoption tracking?

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Matrix on Debian blog: June 2019 Matrix on Debian update

27 June, 2019 - 02:36

This is an update on the state of Matrix-related software in Debian.

Synapse

Unfortunately, the recently published Synapse 1.0 didn’t make it into Debian Buster, which is due to be released next week, so if you install 0.99.2 from Buster, you need to update to a newer version which will be available from backports shortly after the release.
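
Once the backport is available, the upgrade should look roughly like this (a sketch; it assumes the usual buster-backports suite and the matrix-synapse package name):

echo 'deb http://deb.debian.org/debian buster-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t buster-backports matrix-synapse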

Originally, 0.99 was meant to be the last version before 1.0, but due to a bunch of issues discovered since then, some of them security-related, a new incompatible room format was introduced in 0.99.5. This means the 0.99.2 currently in Debian Buster will only be of limited usefulness, since rooms are being upgraded to the new format as 1.0 is deployed across the network.

For those of you running the forever-unstable sid, good news: Synapse 1.0 is now available in unstable! ACME support has not yet been enabled, since it requires a few packages not yet in Debian (they’re currently in the NEW queue). We hope it will be available soon after Buster is released.

Quaternion

Quaternion 0.0.9.4 is being packaged by Hubert Chathi, soon to be uploaded. Hubert has already updated and uploaded libqmatrixclient and olm, which are waiting in NEW.

Circle

There’s been some progress on packaging Circle, a modular IRC client with Matrix support. The backend and IRC support have been available in Debian for some time already, but to be useful it also needs a user-facing front-end. The GTK 2 front-end has just been uploaded to Debian, as have the necessary Perl modules for Matrix support. All of these packages are now being reviewed in the NEW queue.

Fractal

Early in June, Andrej Shadura looked into packaging Fractal, but found that a few crates in Debian are of versions incompatible with what upstream expects. A current blocker is a pending release of rust-phf.

Get in touch

Come chat to us in #debian-matrix:matrix.org!

Sam Hartman: AH/DAM/DPL Meet Up

26 June, 2019 - 21:42
All the members of the Antiharassment team met with the Debian Account Managers and the DPL in that other Cambridge, the one with proper behaviour, not the one where pounds are weight and not money.

I was nervous. I was not part of decision making earlier this year around code of conduct issues. I was worried that my concerns would be taken as insensitive judgment applied by someone who wasn’t there.

I was worried about whether I would find my values aligned with the others. I care about treating people with respect. I also care about freedom of expression. I value a lot of feminist principles and fighting oppression. Yet I’m happy with my masculinity. I acknowledge my privilege and have some understanding of the inequities in the world. Yet I find some arguments based on privilege problematic and find almost all uses of the phrase “check your privilege” to be dismissive and to deny any attempt at building empathy and understanding.

And Joerg was there. He can be amazingly compassionate and helpful. He can also be gruff at times. He values brevity, which I’m not good at. I was bracing myself for a sharp, brief, gruff rebuke delivered in response to my feedback. I know there would be something compassionate under such a rebuke, but it might take work to find.

The meeting was hard; we were talking about emotionally intense issues. But it was also wonderful. We made huge progress. This blog is not about reporting that progress.

Like the other Debian meetings I’ve been at, I felt like I was part of something wonderful. We sat around and described the problems we were working on. They were social not technical. We brainstormed solutions, talked about what worked, what didn’t work. We disagreed. We listened to each other. We made progress.

Listening to the discussions on debian-private in December and January, it sounded like DAM and Antiharassment thought they had it all together. I got a note asking if I had any suggestions for how things could have been done better. I kind of felt like they were being polite and asking since I had offered support.

Yet I know now that they were struggling as much as any of us struggle with a thorny RC bug that crosses multiple teams and packages. The account managers tried to invent suspensions in response to what was going on. They wanted to take a stand against bullying and disrespectful behavior. But they didn’t want to drive away contributors; they wanted to find a way to let people know that a real problem required immediate attention. Existing tools were inadequate. So they invented account suspensions. It was buggy. And when your social problem solving tools are buggy, people get hurt.

But I didn’t find myself facing off against that mythical group of people sure in their own actions I had half imagined. I found myself sitting around a table with members of my community, more alike than different. They had insecurities just like I do. They doubted themselves. I’m sure there was some extent to which they felt it was the project against them in December and January. But they also felt some of that pain that raged across debian-private. They didn’t think they had the answers, and they wanted to work with all of us to find them.

I found a group of people who genuinely care about openness and expressing dissenting views. The triggers for action were about how views were expressed, not about those views. The biggest way to get under DAM’s skin, and get them started thinking about whether there is a membership issue, appears to be declining to engage constructively when someone wants to talk to you about a problem. In contrast, even if something has gone horribly wrong, trying to engage constructively is likely to get you the support of all of us around that table in finding a way to meet your needs as well as those of the greater project.

Fear over language didn’t get in our way. People sometimes made errors with someone’s preferred pronouns. It wasn’t a big deal: when they noticed, they corrected themselves, acknowledged that they cared about the issue, and went on with life. There was cursing sometimes and some really strong feelings.

There was even a sex joke. Someone talked about sucking and someone else intentionally misinterpreted it in a sexual context. But people paid attention to the boundaries of others. I couldn’t have gotten away with telling that joke: I didn’t know the people well enough to know their boundaries. It is not that I’m worried I’ll offend. It is that I actively want to respect the others around me. One way I can do that is to understand their boundaries and respect them.

One joke did cross a line. With a series of looks and semi-verbal communication, we realized that was probably a bit too far for that group while we were meeting. The person telling the joke acknowledged and we moved on.

I was reassured that we all care about the balance that allows Debian to work. We bring the same dedication to creating the universal operating system that we do to building our community. With sufficient practice we’ll be really good at the community work. I’m excited!

Jonathan McDowell: Support your local Hackerspace

26 June, 2019 - 20:43

My first Hackerspace was Noisebridge. It was full of smart and interesting people and I never felt like I belonged, but I had just moved to San Francisco and it had interesting events, like 5MoF, and provided access to basic stuff I hadn’t moved with me, like a soldering iron. While I was never a heavy user of the space I very much appreciated its presence, and availability even to non-members. People were generally welcoming, it was a well stocked space and there was always something going on.

These days my local hackerspace is Farset Labs. I don’t have a need for tooling in the same way, being lucky enough to have space at home and access to all the things I didn’t move to the US, but it’s still a space full of smart and interesting people that has interesting events. And mostly that’s how I make use of the space - I attend events there. It’s one of many venues in Belfast that are part of the regular Meetup scene, and for a while I was just another meetup attendee. A couple of things changed the way I looked at it. Firstly, for whatever reason, I have more of a sense of belonging. It could be because the tech scene in Belfast is small enough that you’ll bump into the same people at wildly different events, but I think that’s true of the tech scene in most places. Secondly, I had the realisation (and this is obvious once you say it, but still) that Farset was the only non-commercial venue that was hosting these events. It’s predominantly funded by members’ fees; it’s not getting Invest NI or government subsidies (though I believe Weavers Court is a pretty supportive landlord).

So I became a member. It then took me several months after signing up to actually be in the space again, but I feel it’s the right thing to do; without the support of their local tech communities hackerspaces can’t thrive. I’m probably in Farset at most once a month, but I’d miss it if it wasn’t there. Plus I don’t want to see such a valuable resource disappear from the Belfast scene.

And that would be my message to you, dear reader. Support your local hackerspace. Become a member if you can afford it, donate what you can if not, or just show up and help out - as these are non-commercial entities, things generally happen as a result of people turning up and volunteering their time to help out.

(This post prompted by a bunch of Small Charity Week tweets last week singing the praises of Farset, alongside the announcement today that Farset Labs is expanding - if you use the space and have been considering becoming a member or even just donating, now is the time to do it.)

Jonathan Carter: PeerTube and LBRY

26 June, 2019 - 02:14

I have many problems with YouTube; who doesn’t these days, right? I’m not going to go into all the nitty gritty of it in this post, but here’s a video from a LBRY advocate that does a good job of summarizing some of the issues by using clips from YouTube creators:

(link to the video if the embedded video above doesn’t display)

I have a channel on YouTube that I have lots of plans for. I started making videos last year and created 59 episodes for Debian Package of the Day. I’m proud that I got so far, because I tend to lose interest in things after I figure out how they work. I suppose some people have assumed that my video channel is dead because I haven’t uploaded recently, but I’ve just been really busy and, in recent weeks, also a bit tired as a result. Things should pick up again soon.

Mediadrop and PeerTube

I wanted to avoid a reliance on YouTube early on, and set up a mediadrop instance on highvoltage.tv. Mediadrop ticks quite a few boxes but there’s a lot that’s missing. On top of that, it doesn’t seem to be actively developed anymore so it will probably never get the features that I want.

Screenshot of my MediaDrop instance.

I’ve been planning to move over to PeerTube for a while and hope to complete that soon. PeerTube is a free software video hosting platform that resembles YouTube-style video sites. It’s on the fediverse, and videos viewed by users are shared via WebTorrent with other users who are viewing the same videos. After reviewing different video hosting platforms last year during DebCamp, I also came to the conclusion that PeerTube is the right platform to host DebConf and related Debian videos on. I intend to implement an instance for Debian shortly after I finish up my own migration.

(link to PeerTube video if embedded video doesn’t display)

Above is an introduction of PeerTube by its creators (which runs on PeerTube so if you’ve never tried it out before, there’s your chance!)

LBRY

LBRY App Screenshot

LBRY takes a drastically different approach to the video sharing problem. It’s not yet as polished as PeerTube in terms of user experience, and it’s a lot newer too, but it’s interesting in its own right. It’s also free software, implements its own protocol that you access via lbry:// URIs, and prioritizes its own native apps over access through a web browser. Videos are also shared on its peer-to-peer network. One big thing that it implements is its own blockchain, along with its own LBC currency (don’t roll your eyes just yet, it’s not some gimmick from 2016 ;) ). It’s integrated with the app so viewers can easily give a tip to a creator. I think that’s better than YouTube’s ad approach, because people can earn money by the value their video provides to the user, not by the amount of eyes they bring to the platform. It’s also possible for creators to create paid-for content, although I haven’t seen that on the platform yet.

If you try out LBRY using my referral code, I get a whole 20 LBC (1 LBC is nearly USD $0.04, so I’ll be rich soon!). They also have a sync system that can copy all your existing YouTube videos over to LBRY. I requested this yesterday and it’s scheduled, so at some point my YouTube videos will show up on my @highvoltage channel on LBRY. Their roadmap also includes some interesting reading.

I definitely intend to try out LBRY’s features and its unique approach, although for now my plan is to use my upcoming PeerTube instance as my main platform. It’s the most stable and viable long-term option at this point and covers all the important features that I care about.

Ian Jackson: Important walking/cycle link closed, poor diversion, terrible signage

25 June, 2019 - 22:50
Highways England have decided to close the busway track under the A14 in Cambridge, initially for 2 weeks, but we hear rumours of an 8-week closure later in the year.

Problems:

  • The first thing that many people seem to have known about this is finding their usual route to work blocked: there has been no communication, let alone consultation.
  • There is no signage about the blockage except right next to it. This is bad, because this is a very important link that affects routes for miles around.
  • Even when you get up to the blockage, the signs are rather poor.
  • The diversionary route is too long. For a pedestrian, a 15 minute walk is turned into a half hour diversion, making many previously easy journeys hopelessly impractical (for example, Cambridge Regional College to Histon high street).
  • The diversionary route is very poorly signed.
  • The diversionary route involves crossing a big road without any useful traffic lights.
  • The diversionary route is very bitty: many many traffic lights, junctions, corners, and so on, which is a nuisance for anyone cycling.

Summary of the diversion: overall, the extra distance is 2.7km, with an additional 9 traffic lights. One is out of use, meaning you have to take your life in your hands crossing at a junction where the motor vehicles have a green light.

The extra time for a pedestrian is probably about 30 minutes, replacing a 15 minute walk. For a cyclist, a pleasant 4-5 minute link, covering most of the distance between the Science Park and Histon, becomes a 15 minute odyssey.

References: From Histon, in pictures

Dirk Eddelbuettel: RcppTOML 0.1.6: Tinytest support and more robustification

25 June, 2019 - 19:14

A new RcppTOML release is now on CRAN. RcppTOML brings TOML to R.

TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML – though sadly these may be too ubiquitous now. TOML has been making inroads with projects such as the Hugo static blog compiler, or the Cargo system of Crates (aka “packages”) for the Rust language.
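
As a small illustration of the format (a minimal sketch using the package's parseTOML() function on a throwaway file):

library(RcppTOML)

# A tiny TOML document: readable by humans, strongly typed for machines
toml <- c('title = "example"',
          '[owner]',
          'name = "Jane"',
          'ncpu = 4')
f <- tempfile(fileext = ".toml")
writeLines(toml, f)
parseTOML(f)   # returns a nested named list mirroring the TOML structure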

Václav Hausenblas sent a number of excellent and very focused PRs helping with some input format corner cases, as well as with one test. We added support for the wonderful new tinytest package. The detailed list of changes in this incremental version is below.

Changes in version 0.1.6 (2019-06-25)
  • Propagate the escape switch to calls of getTable() and getArray() (Václav Hausenblas in #32 completing #26).

  • The README.md file now mentions TOML v0.5.0 support (Watal Iwasaki in #35 addressing #33).

  • Encodings in arrays are set to UTF-8 for character vectors (Václav Hausenblas in #36 completing #28).

  • The package now uses tinytest (Dirk in #38 fixing #37, also Václav in #39).

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppTOML page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
