Date: Wed, 12 Nov 2014 08:11:53 +0000
From: "Andrew M.A. Cater"
Subject: Debian RT - new key for amacater
User-Agent: Mutt/1.5.23 (2014-03-12)
pub 1024D/E93ADE7B 2001-07-04
Key fingerprint = F3FA 2752 1327 7904 846D C0DE 3233 C127 E93A DE7B
uid Andrew Cater (Andrew M.A. Cater)
sub 1024g/E8C8CC00 2001-07-04
pub 4096R/22EF1F0F 2014-08-29
Key fingerprint = 5596 5E39 93E0 6E2B 5BA5 CD84 4AA8 FC24 22EF 1F0F
uid Andrew M.A. Cater (Andy Cater)
uid Andrew M.A. Cater (non-Debian email)
sub 4096R/923AB77E 2014-08-29
This is intended to replace the old key with the new key, as part of a key transition away from old, insecure keys.
All the best,
This is because Google (and Planet Debian) are more reliable than my email inbox.
[Keys exchanged at the mini-Debconf have now been signed with the new 4096 bit key]
Some of you may already be aware of the gift tag which has been used for a while to indicate bugs which are suitable for new contributors to use as an entry point to working on specific packages. Unfortunately, some of us (including me!) were unaware that this tag even existed.
Luckily, Lucas Nussbaum clued me in to the existence of this tag, and after a brief bike-shed naming thread and some voting using pocket_devotee, we decided to name the new tag newcomer. I have now added this tag to the BTS documentation and tagged all of the bugs which were usertagged "gift" with it.
If you have bugs in your package which you think are ideal for new contributors to Debian (or your package) to fix, please tag them newcomer. If you're getting started in Debian, and working on bugs to fix, please search for the newcomer tag, grab the helm, and contribute to Debian.
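If you want to search from a script rather than by hand, the BTS web interface (pkgreport.cgi) accepts a tag filter. A minimal sketch of building that query URL — the `tag` parameter name reflects the pkgreport.cgi interface as I understand it:

```python
from urllib.parse import urlencode

# Build the BTS web-interface query for newcomer-tagged bugs.
base = "https://bugs.debian.org/cgi-bin/pkgreport.cgi"
url = base + "?" + urlencode({"tag": "newcomer"})
print(url)  # -> https://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=newcomer
```

Opening that URL in a browser lists all bugs currently carrying the tag.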
I'm glad to announce that Virginia King has been selected as one of the three interns for this round of the FOSS Outreach Program for women. Starting December 9th, and continuing until March 9th, she'll be working on improving the documentation of Debian's bug tracking system.
The initial goal is to develop a Bug Triager Howto to help new contributors to Debian jump in and help existing teams triage bugs. We'll be getting in touch with some of the larger teams in Debian to help make this document as useful as possible. If you're a member of a team in Debian who would like this howto to address your specific workflow, please drop me an e-mail, and we'll keep you in the loop.
The secondary goals for this project are to:
- Improve documentation under http://www.debian.org/Bugs
- Document bug tags and categories
- Improve upstream debbugs documentation
I know I promised better stats, but meh... Next week :(
As you can see, there's been a bit of a mass-filing going on, and that pushed us above Wheezy's count for week 46.
My own personal favourite bug is, of course, this one.
The UDD bugs interface currently knows about the following release critical bugs:
- In Total: 218 bugs
- Affecting Jessie: 427 (key packages: 175) That's the number we need to get down to zero before the release. They can be split in two big categories:
  - Affecting Jessie and unstable: 313 (key packages: 131) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
    - 33 bugs are tagged 'patch'. (key packages: 15) Please help by reviewing the patches, and (if you are a DD) by uploading them.
    - 12 bugs are marked as done, but still affect unstable. (key packages: 6) This can happen due to missing builds on some architectures, for example. Help investigate!
    - 268 bugs are neither tagged patch, nor marked done. (key packages: 110) Help make a first step towards resolution!
  - Affecting Jessie only: 114 (key packages: 44) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
How do we compare to the Squeeze release cycle?

Week  Squeeze         Wheezy           Diff
43    284 (213+71)    468 (332+136)    +184 (+119/+65)
44    261 (201+60)    408 (265+143)    +147 (+64/+83)
45    261 (205+56)    425 (291+134)    +164 (+86/+78)
46    271 (200+71)    401 (258+143)    +130 (+58/+72)
47    283 (209+74)    366 (221+145)    +83 (+12/+71)
48    256 (177+79)    378 (230+148)    +122 (+53/+69)
49    256 (180+76)    360 (216+155)    +104 (+36/+79)
50    204 (148+56)    339 (195+144)    +135 (+47/+90)
51    178 (124+54)    323 (190+133)    +145 (+66/+79)
52    115 (78+37)     289 (190+99)     +174 (+112/+62)
1     93 (60+33)      287 (171+116)    +194 (+111/+83)
2     82 (46+36)      271 (162+109)    +189 (+116/+73)
3     25 (15+10)      249 (165+84)     +224 (+150/+74)
4     14 (8+6)        244 (176+68)     +230 (+168/+62)
5     2 (0+2)         224 (132+92)     +222 (+132/+90)
6     release!        212 (129+83)     +212 (+129/+83)
7     release+1       194 (128+66)     +194 (+128/+66)
8     release+2       206 (144+62)     +206 (+144/+62)
9     release+3       174 (105+69)     +174 (+105/+69)
10    release+4       120 (72+48)      +120 (+72/+48)
11    release+5       115 (74+41)      +115 (+74/+41)
12    release+6       93 (47+46)       +93 (+47/+46)
13    release+7       50 (24+26)       +50 (+24/+26)
14    release+8       51 (32+19)       +51 (+32/+19)
15    release+9       39 (32+7)        +39 (+32/+7)
16    release+10      20 (12+8)        +20 (+12/+8)
17    release+11      24 (19+5)        +24 (+19/+5)
18    release+12      2 (2+0)          +2 (+2/+0)
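For readers checking the week-by-week comparison: the Diff column is simply the Wheezy count minus the Squeeze count, overall and for each of the two sub-counts. A tiny sketch of that arithmetic:

```python
# Each count is (total, first_subcount, second_subcount), matching the
# "total (a+b)" notation used in the table.
def diff(squeeze, wheezy):
    (s_total, s_a, s_b), (w_total, w_a, w_b) = squeeze, wheezy
    return w_total - s_total, w_a - s_a, w_b - s_b

# Week 43: Squeeze 284 (213+71), Wheezy 468 (332+136)
print(diff((284, 213, 71), (468, 332, 136)))  # -> (184, 119, 65)
```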
The version number of debian-med metapackages was bumped to 1.99 as a signal that we plan to release version 2.0 with Jessie. As usual the metapackages will be recreated shortly before the final release to include potential changes in the package pool. Feel free to install the metapackages med-* with the package installer of your choice.
As always you can have a look at the packages in our focus by visiting our tasks pages. Please note that there may be new packages that aren’t ready for release and that won’t be installed by using the current metapackages. This is because we don’t stop packaging software when the current testing is in freeze.

Some support for Hospital Information Systems
This release contains, for the first time some support for Hospital Information Systems (HIS) with the dependency fis-gtm of the med-his metapackage. This was made possible due to the work of Luis Ibanez (at kitware at the time when working on the packaging) and Amul Shah (fisglobal). Thanks to a fruitful cooperation between upstream FIS and Debian the build system of fis-gtm was adapted to enable an easier packaging.
The availability of fis-gtm will simplify running Vista-foia on Debian systems and we are finally working on packaging Vista as well to make Debian fit for running inside hospitals.
There was some interesting work done by Emilien Klein, who was working hard to get GNUHealth packaged. Emilien has given a detailed explanation on the Debian Med mailing list of the reasons why he removed the existing packages from the Debian package pool again. While this is a shame for GNUHealth users, there might be an opportunity to revive this effort if there were better coordination between upstream and Tryton (the framework GNUHealth is based upon). In any case, the packaging code in SVN is a useful resource to base private packages on. Feel free to contact us via the Debian Med mailing list if you consider creating GNUHealth Debian packages.

Packages moved from non-free to main
The Debian Med team worked hard to finally enable DFSG-free licenses for PHYLIP and other packages based on this tool. PHYLIP is well known in bioinformatics and is actually one of the first packages in this field inside Debian (oldest changelog entry 28 Aug 1998). Since then it was considered non-free because its use was restricted to scientific / non-commercial use, and it also had the condition that you needed to pay a fee to the University of Washington if you intended to use it commercially.
Since Debian Med was started, we were in continuous discussion with the author, Joe Felsenstein. We even started an online petition to show how large the interest in a DFSG-free PHYLIP might be. As a side note: this petition was *not* presented to the authors, since they happily decided to move to a free license because of the previous discussion, and since they realised that the money they "gained" over the years was only minimal. The petition is mentioned here to demonstrate that it is possible to gather support to see positive changes implemented that benefit all users, and that this approach can be used for similar cases.
So finally PHYLIP was released in September under a BSD-2-clause license, and in turn SeaView (a similarly famous program and also a long-term non-free citizen) depending on PHYLIP code was freed as well. There are several other tools, like python-biopython and python-cogent, which call PHYLIP if it exists. So not only is PHYLIP freed; we can now also stop removing those parts of the test suites of these other tools that use PHYLIP.
Thanks to all who participated in freeing PHYLIP, especially its author Joe Felsenstein.

Autopkgtest in Debian Med packages
We tried hard to add autopkgtests to all packages where some upstream test suite exists, and we also tried to create some tests of our own. Since we consider testing of scientific software a very important feature, this work was a major focus for the Jessie release. In doing so we were able to drastically enhance the reliability of packages and found formerly hidden dependency relations. Perhaps the hardest work was to run the full test suite of python-biopython, which also uncovered some hidden bugs in the upstream code on architectures that are not so frequently used in the field of bioinformatics. This was made possible by upstream's very good support; they were very helpful in solving the issues we reported.
However, we are not at 100% coverage of autopkgtest, and we will keep on working on our packages in the next release cycle for Jessie+1.

General quality assurance
A general inspection of all Debian Med packages was done to check all packages which were uploaded before the Wheezy release and never touched since then. Those packages were checked for changed upstream locations which might have been hidden from uscan, and in some cases new upstream releases were spotted by this investigation. Other old packages were re-uploaded to conform to current policy and packaging tools, also polishing lintian issues.

Publication with Debian Med involvement
The Debian Med team is involved in a paper which is in BioMed Central (in press). The title will be "Community-driven development for computational biology at Sprints, Hackathons and Codefests".

Updated team metrics
The team metrics graphs on the Debian Med Blend entry page were updated. At the bottom you will find a 3D Bar chart of dependencies of selected metapackages over different versions. It shows our continuous work in several fields. Thanks to all Debian Med team members for their rigorous work on our common goal to make Debian the best operating system for medicine and biology.
Please note that VCS stat calculation is currently broken and does not reflect the latest commits this year.

Blends installable via d-i?
In bug #758116 it is requested to list all Blends, and thus also Debian Med, in the initial tasksel selection. This would solve, in a more general and better way, a long-standing open issue which was addressed more than eleven years ago (in #186085). It would add a feature frequently requested by our users, who always wonder how to install Debian Med.
While there is no final decision on bug #758116, and we are quite late with the request to get this implemented in Jessie, feel free to contribute ideas so that this selection of Blends can be done in the best possible manner.

Debian Med Bug Squashing Advent Calendar 2014
The Debian Med team will again do the Bug Squashing Advent Calendar. Feel free to join us in our bug squashing effort where we close bugs while other people are opening doors. :-)
I've been crap about blogging lately. Let's see if I can fix that.
Back in February and March, Jo and I went on vacation for 2 weeks touring California and Nevada. We had an awesome time and we got to see and do lots of fun stuff. I'm not going to go into all the details, as it's a long time ago now...! I've got a massive set of photos online, though.
However, two things struck me as odd when we were there. I'm travelling quite regularly to the US these days due to my work in Linaro, but these still seemed new when I saw them this February/March. These are, admittedly trivial things, but they really stood out for me. Maybe I'm a little weird? :-)
Curved shower curtain tracks
I guess I'm not the only one who's been annoyed by shower curtains sticking to me in the shower, but I'd not really paid much thought to it until now. Suddenly, as of maybe 18 months ago I'm seeing most hotel bathrooms replacing the straight curtain track with a curved one, to stop that happening. This photo shows that process with both tracks visible...
Waterproof/washable TV remotes
This one really surprised me. As Jo will attest, I have a little bit of an obsession with TVs and set-top boxes in hotel rooms. This dates from my time working for Amino where we made set-top boxes, and I got into the habit of checking what products were in the hotels I stayed in. I've seen a range of weird and wonderful setups over the years, but never this one before. In two of the hotels on our trip, they had replaced the normal TV remotes with washable/wipe-down ones. Weird...
Alongside our originally planned general sprint work on Thursday and Friday, the Release Team had their own sprint to work through remaining post-freeze decisions about policies, architectures, naming etc. The rest of us worked through a range of topics all over Debian: installer, admin, ARM arch support etc. We even managed to fix Andy's laptop :-).
Saturday and Sunday were two days devoted to more traditional conference sessions. Our talks covered a wide range of topics: d-i progress and an update from the Release Team, backup software and an arm64 laptop project to name but a few.
I was very happy that the Release Team announced "Stretch" and "Buster" as upcoming release names, and obviously it was lovely to have arm64 and ppc64el confirmed as release architectures for Jessie. It's a shame to see the kfreebsd ports dropped from the official Jessie list, but let's see if the porters can do an unofficial release anyway. I'll help if I can with things like CD builds...
Several volunteers from the DebConf video team were on hand too, so our talks were recorded and are already online at http://meetings-archive.debian.net/pub/debian-meetings/2014/mini-debconf-cambridge/webm/. Yay!
Again, the mini-conf went well and feedback from attendees was universally positive. We may run again next year. More importantly, I can confirm that we're definitely planning on bidding to host a full DebConf in Cambridge in the summer of 2017.
The presentation abstract tried to explain this:
A software project that is developed by more than a single person starts requiring more than just the source code. From revision control systems through to continuous integration and issue tracking, all these services need deploying and maintaining.
This presentation takes a look at what services a project ought to have, what options exist to fulfil those requirements, and a practical look at an open source project's actual implementation.

I presented on Sunday morning but got a good audience and I am told I was not completely dreadful. The talk was recorded and is publicly available along with all the rest of the conference presentations.
Unfortunately due to other issues in my life right now I did not prepare well enough in advance and my slide deck was only completed on Saturday so I was rather less familiar with the material than I would have preferred.
The rest of the conference was excellent and I managed to see many of the presentations on a good variety of topics without an overwhelming attention to Debian issues. My youngest son brought himself along on both days and "helped" with front desk. He was also the only walk-out in my presentation; he insists it was just because he "did not understand a single thing I was saying", but perhaps he just knew who the designated driver was.
I would like to thank everyone who organised and sponsored this event for an enjoyable weekend, and I look forward to the next one.
I left Debian. I don't really have a lot to say about why, but I do want to clear one thing up right away. It's not about systemd.
As far as systemd goes, I agree with my friend John Goerzen:
I promise you – 18 years from now, it will not matter what init Debian chose in 2014. It will probably barely matter in 3 years.
And with Jonathan Corbet:
However things turn out, if it becomes clear that there is a better solution than systemd available, we will be able to move to it.
I have no problem with trying out a piece of Free Software that might have abrasive authors, all kinds of technical warts, a debatable design, scope creep, etc. None of that stopped me from giving Linux a try in 1995, and I'm glad I jumped in with both feet.
It's important to be unafraid to make a decision, try it out, and if it doesn't work, be unafraid to iterate, rethink, or throw a bad choice out. That's how progress happens. Free Software empowers us to do this.
Debian used to be a lot better at that than it is now. This seems to have less to do with the size of the project, and more to do with the project having aged, ossified, and become comfortable with increasing layers of complexity around how it makes decisions. To the point that I no longer feel I can understand the decision-making process at all ... or at least, that I'd rather be spending those scarce brain cycles on understanding something equally hard but more useful, like category theory.
It's been a long time since Debian was my main focus; I feel much more useful when I'm working in a small nimble project, making fast and loose decisions and iterating on them. Recent events brought it to a head, but this is not a new feeling. I've been less and less involved in Debian since 2007, when I dropped maintaining any packages I wasn't the upstream author of, and took a year of mostly ignoring the larger project.
Now I've made the shift from being a Debian developer to being an upstream author of stuff in Debian (and other distros). It seems best to make a clean break rather than hang around and risk being sucked back in.
My mailbox has been amazing over the past week by the way. I've heard from so many friends, and it's been very sad but also beautiful.
DebConf15 will take place in Heidelberg, Germany in August 2015. We strive to provide an intense working environment and enable good progress for Debian and for Free Software in general. We extend an invitation to everyone to join us and to support this event. As a volunteer-run non-profit conference, we depend on our sponsors.
Nine companies have already committed to sponsor DebConf15! Let's introduce them:
Google (the search engine and advertising company), Farsight Security, Inc. (developers of real-time passive DNS solutions), Martin Alfke / Buero 2.0 (Linux & UNIX Consultant and Trainer, LPIC-2/Puppet Certified Professional) and Ubuntu (the OS supported by Canonical) are our Silver sponsors.
Would you like to become a sponsor? Do you know of or work in a company or organization that may consider sponsorship?
Please have a look at our sponsorship brochure (also available in German), in which we outline all the details and describe the sponsor benefits. For instance, sponsors have the option to reach out to Debian contributors, derivative developers, upstream authors and other community members during a Job Fair and through postings on our job wall, and to show-case their Free Software involvement by staffing a booth on the Open Weekend. In addition, sponsors are able to distribute marketing materials in the attendee bags. And it goes without saying that we honour your sponsorship with visibility of your logo in the conference's videos, on our website, on printed materials, and banners.
The final report of DebConf14 is also available, illustrating the broad spectrum, quality, and enthusiasm of the community at work, and providing detailed information about the different outcomes that last conference brought up (talks, participants, social events, impact in the Debian project and the free software scene, and much more).
Please note that this is not meant to be systemd-bashing, just a criticism based on a counter-example refutation of Josselin's implication that there is no use case better covered by SysV init: this is false, as there is at least one. And yes, there are probably many cases better covered by systemd; I am making no claims about that.

A use case better covered by SysV init: encrypted block devices
So, waiting for a use case better covered by SysV init? Rejoice, you will not die waiting; here is one: encrypted block devices. That case works just fine with SysV init, without any specific configuration, whereas systemd just sucks at it. There exists a way to make it work², but:
- if systemd requires specific configuration to handle such a case, whereas SysV init does not, that means this case is better covered by SysV init;
- that workaround does not actually work.
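For context, the "no specific configuration" setup being discussed is just the standard /etc/crypttab entry for the device; a minimal example (the target name and UUID here are hypothetical placeholders):

```
# /etc/crypttab: <target name> <source device> <key file> <options>
cryptdata  UUID=0a1b2c3d-placeholder  none  luks
```

With "none" as the key file, the init system is expected to prompt for the passphrase at boot.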
If you know any better, I would be glad to try it. Believe me, I like the basic principles of systemd³ and I would be glad to have it working correctly on my system.

Notes
- Well, it does accept comments, but marks them as spam and does not show them, which is roughly equivalent. ↑
- Installing an additional piece of software, Plymouth, is supposed to make systemd work correctly with encrypted block devices. Yes, this is additional configuration, as that piece of software does not come along when you install systemd, and it is not even suggested, so a regular user cannot guess it. ↑
- Though I must say I hate the way it is pushed into the GNU/Linux desktop systems. ↑
My desk today looks like this:
Yep, that’s a computer. Motherboard to the right, floppy drives and CD drive stacked on top of the power supply, hard drive to the left.
And it’s an OLD computer. (I had forgotten just how loud these old power supplies are; wow.)
The point of this exercise is to read data off the floppies that I have made starting nearly 30 years ago now (wow). Many were made with DOS, some were made on a TRS-80 Color Computer II (aka CoCo 2). There are 5.25″ disks, 3.5″ disks, and all sorts of formats. Most are DOS, but the TRS-80 ones use a different physical format. Some of the data was written by Central Point Backup (from PC Tools), which squeezed more data onto the disk by adding an extra sector or something, if my vague memory is working.
Reading these disks requires low-level playing with controller timing, and sometimes the original software to extract the data. It doesn’t necessarily work under Linux, and certainly doesn’t work with USB floppies or under emulation. Hence this system.
It’s a bridge. Old enough to run DOS, new enough to use an IDE drive. I can then hook up the IDE drive to an IDE-to-USB converter and copy the data off it onto my Linux system.
But this was tricky. I started the project a few years ago, but life got in the way. Getting back to it now, with the same motherboard and drive, I just couldn’t get it to boot. I eventually began to suspect some disk geometry settings, and with some detective work from fdisk in Linux plus some research into old BIOS disk size limitations, discovered the problem was a 2GB limit. Through some educated trial and error, I programmed the BIOS with a number of cylinders that worked, set it to LBA mode, and finally my 3-year-old DOS 6.2 installation booted.
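The 2GB ceiling falls out of the old CHS (cylinder/head/sector) arithmetic; a sketch using commonly cited limits (4095 cylinders, 16 heads, 63 sectors/track — assumptions, since the exact cap varies by BIOS):

```python
# BIOS-visible capacity = cylinders * heads * sectors/track * bytes/sector.
def chs_capacity(cylinders, heads, sectors, sector_size=512):
    return cylinders * heads * sectors * sector_size

cap = chs_capacity(4095, 16, 63)
print(cap)          # -> 2113413120 bytes
print(cap / 2**30)  # just under 2 GiB: the classic "2GB limit"
```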
I had also forgotten how finicky things were back then. Pop a floppy from a Debian install set into the drive, type dir b:, and the system hangs. I guess there was a reason the reset button was prominent on the front of the computer back then…
It’s finally become properly autumnal, in the real world and in Debian. One week ago, I announced (on behalf of the whole release team) that Debian 8 “Jessie” had successfully frozen on time.
At 18:00 that evening we had 310 release critical bugs – that is, the number that we must reduce to 0 before the release is ready. How does that number look now?
Well, there are now 315 bugs affecting Jessie, at various stages of progression. That sounds like it’s going in the wrong direction, but considering that over a hundred new bugs were filed just 8 hours after the freeze announcement, things are actually looking pretty good.
Out of those 315 bugs, 91 have been fixed and the packages affected have already been unblocked by the release team. The fixed packages will migrate to Jessie in the next few days, if they continue to be bug-free.
Thirty-four bugs are apparently fixed in unstable but are not cleared for migration yet. That means that the release team has not spotted the fix, or nobody has told us, or the fixed package is unsuitable for some other reason (like unrelated changes in the same upload). You can help by trying to find out which reason applies, and talking to us about it. Most likely nobody has asked us to unblock it yet.
Speaking of unblocks, we currently have twenty-four requests that need to be looked at, and a further 20 which are awaiting more information from the maintainer. We already investigated and resolved 260 requests.
Our response rate is currently pretty good, but it’s unclear whether we can sustain it indefinitely. We all have day jobs, for example. One way you could help is to review the list of unchecked unblocks and gather up missing information, or look at the ones tagged moreinfo and see whether that’s still the case (maybe the maintainer replied, but forgot to remove the tag). If you’re confident, you might even try triaging some of the obvious requests and give some feedback to the maintainer, though the final decision will be made by a release team member.
After all, the quicker this goes, the sooner we can release and thaw unstable again.
Footnote: the method used to determine RC bug counts last week and this week differ, and therefore so could the margin for error. Surprisingly enough, counting bugs is not an exact science. I’m confident these numbers are close enough for broad comparison, even if they’re out by one or two.
Free software has been my career for a long time -- nothing else since 1999 -- and it continues to be a happy surprise each time I find a way to continue that streak.
The latest is that I'm being funded for a couple of years to work part-time on git-annex. The funding comes from the DataLad project, which was recently awarded a grant by the National Science Foundation. DataLad folks (at Dartmouth College and at Magdeburg University in Germany) are working on providing easy access to scientific data (particularly neuroimaging). So git-annex will actually be used for science!
I'm being funded for around 30 hours of work each month, to do general work on the git-annex core (not on the webapp or assistant). That includes bugfixes and some improvements that are wanted for DataLad, but are all themselves generally useful. (see issue list)
This is enough to get by on, at least in my current living situation. It would be great if I could find some funding for my other work time -- but it's also wonderful to have the flexibility to spend time on whatever other interesting projects I might want to.
In October 2014, we allocated 13.75 work hours to 3 contributors:
- Thorsten Alteholz
- Raphaël Hertzog worked only 10 hours. The remaining hours will be done over November.
- Holger Levsen did nothing (for unexpected personal reasons); he will catch up in November.
Obviously, only the hours actually worked have been paid. Should the backlog grow further, we will seek more paid contributors (to share the workload) and try to make it easier to redispatch work hours once a contributor knows that he or she won’t be able to handle the hours that were allocated.

Evolution of the situation
Compared to last month, we gained two new sponsors (Daevel and FOSSter, thanks to them!) and we have now 45.5 hours of paid LTS work to “spend” each month. That’s great but we are still far from our minimal goal of funding the equivalent of a half-time position.
In terms of security updates waiting to be handled, the situation is a bit worse than last month: while the dla-needed.txt file only lists 33 packages awaiting an update (6 less than last month), the list of open vulnerabilities in Squeeze shows about 60 affected packages in total. This difference has two explanations: CVE triaging for Squeeze has not been done in the last days, and the POODLE issue(s) with SSLv3 affect a very large number of packages where it’s not always clear what the proper action is.
In any case, it’s never too late to join the growing list of sponsors and help us do a better job; please check with your company managers. If it is not possible for this year, consider including it in the budget for next year.

Thanks to our sponsors
Let me thank our main sponsors:
- Gold sponsors:
- Silver sponsors:
- AD&D – David Ayers – IntarS Austria
- Domeneshop AS
- Trollweb Solutions
- Université Lille 3
- Bronze sponsors:
Generating data with entropy, or random number generation (RNG), is a well-known difficult problem. Many crypto algorithms and protocols assume random data is available. There are many implementations out there, including /dev/random in the BSD and Linux kernels and API calls in crypto libraries such as GnuTLS or OpenSSL. How they work can be understood by reading the source code. The quality of the data depends on the actual hardware and what entropy sources were available — the RNG implementation itself is deterministic; it merely converts data with supposed entropy from a set of data sources and then generates an output stream.
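That deterministic conversion step can be sketched in a few lines. This is an illustration of the principle only, not a production RNG, and the source names are made up:

```python
import hashlib

# Hash together samples from several supposed entropy sources into a
# fixed-size output. The conversion is deterministic: identical inputs
# always yield identical output; all the unpredictability must come
# from the inputs themselves.
def mix(samples):
    h = hashlib.sha256()
    for s in samples:
        h.update(len(s).to_bytes(4, "big"))  # length-prefix each sample
        h.update(s)
    return h.digest()

out = mix([b"timing-jitter", b"disk-seek-latency", b"keyboard-events"])
print(out.hex())
```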
In some situations, like on virtualized environments or on small embedded systems, it is hard to find sources of sufficient quantity. Rarely are there any lower-bound estimates on how much entropy there is in the data you get. You can improve the RNG issue by using a separate hardware RNG, but there is deployment complexity in that, and from a theoretical point of view, the problem of trusting that you get good random data merely moved from one system to another. (There is more to say about hardware RNGs, I’ll save that for another day.)
For some purposes, the available solutions do not inspire enough confidence in me because of their high complexity. Complexity is often the enemy of security. In crypto discussions I have said, only half-jokingly, that about the only RNG process I would trust is one that I can explain in simple words and implement myself with the help of pen and paper. Normally I use the example of rolling a normal six-sided dice (a D6) several times. I have been thinking about this process in more detail lately, and felt it was time to write it down, regardless of how silly it may seem.
A dice with six sides produces a random number between 1 and 6. It is relatively straightforward to convince yourself intuitively that it is not clearly biased: inspect that it looks symmetric and do some trial rolls. By repeatedly rolling the dice, you can generate as much data as you need, time permitting.
I do not understand enough thermodynamics to know how to estimate the amount of entropy of a physical process, so I need to resort to intuitive arguments. It would be easy to just assume that a dice produces 3 bits of entropy, because its six possible outcomes fit into 3 bits (2^3 = 8). At least I find it easy to convince myself that 3 bits is an upper bound. I suspect that most dice have some form of defect, though, which leads to a very small bias that could be found with a large number of rolls. Thus I would propose that the amount of entropy of most D6's is slightly below 3 bits on average. Further, to establish a lower bound, it seems intuitively easy to believe that if the entropy of a particular D6 were closer to 2 bits than to 3 bits, this would be noticeable fairly quickly by trial rolls. That assumes the dice does not have complex logic and machinery in it that would hide the patterns. With the tinfoil hat on, consider a dice with a power source and mechanics in it that allowed it to decide which number it would land on: it could generate a seemingly random pattern that still contained 0 bits of entropy. For example, suppose a D6 is built to produce the pattern 4, 1, 4, 2, 1, 3, 5, 6, 2, 3, 1, 3, 6, 3, 5, 6, 4, … this would mean it produces 0 bits of entropy (compare the numbers with the decimals of sqrt(2)). Other factors may also influence the amount of entropy in the output; consider what happens if you roll the dice by just dropping it straight down from 1 cm/1 inch above the table. With this discussion as background, and for simplicity, going forward I will assume that my D6 produces 3 bits of entropy on every roll.
We need to figure out how many times to roll it. I usually find myself needing a 128-bit random number (16 bytes); crypto algorithms and protocols typically use power-of-2 data sizes. 64 bits of entropy means brute-force attacks require about 2^64 tests, and for many operations this is feasible with today’s computing power. Performing 2^128 operations does not seem possible with today’s technology. To produce 128 bits of entropy using a D6 that produces 3 bits of entropy per roll, you need to perform ceil(128/3) = 43 rolls.
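The roll count is easy to check mechanically; a throwaway sketch (variable names are mine) that also shows how 43 rolls split into 8-roll rounds plus a remainder:

```python
from math import ceil

bits_needed = 128   # target entropy for a 128-bit key
bits_per_roll = 3   # the simplifying assumption from above
rolls = ceil(bits_needed / bits_per_roll)
full_rounds, extra = divmod(rolls, 8)  # 8-roll rounds yield 3 bytes each
print(rolls, full_rounds, extra)  # 43 5 3
```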
We also need to design an algorithm to convert the D6 output into the resulting 128-bit random number. While it would be nice, from a theoretical point of view, to let each and every bit of the D6 output influence each and every bit of the 128-bit random number, this becomes difficult to do with pen and paper. For simplicity, my process is to write the binary representation of the D6 output on paper in 3-bit chunks and then read it off in 8-bit chunks. After 8 rolls, there are 24 bits available, which can be read off as three distinct 8-bit numbers. So let’s do this for the D6 outputs 3, 6, 1, 1, 2, 5, 4, 1:
    3   6   1   1   2   5   4   1
    011 110 001 001 010 101 100 001
    01111000 10010101 01100001
    120 = 0x78, 149 = 0x95, 97 = 0x61
After 8 rolls, we have generated the 3-byte hex string “789561”. I repeat the process four more times, concatenating the strings, resulting in a hex string with 15 bytes of data. To get the last byte, I only need to roll the D6 three more times; the two high bits of the last roll are used and the lowest bit is discarded. Let’s say the last three D6 outputs were 4, 2, 3. This results in:
    4   2   3
    100 010 011
    10001001
    137 = 0x89
So the 16 bytes of random data are “789561..89” with “..” replaced by the four 3-byte chunks of data generated in between.
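The write-3-bit-chunks, read-8-bit-chunks step can be sketched in a few lines of Python; the helper name is mine, and the code simply automates the pen-and-paper packing described above, including discarding leftover low-order bits:

```python
def rolls_to_bytes(rolls):
    """Convert D6 rolls (values 1-6) to bytes: write each roll as a
    3-bit binary chunk, concatenate all chunks, then read the result
    off in 8-bit groups, discarding any leftover trailing bits."""
    bits = "".join(format(r, "03b") for r in rolls)
    usable = len(bits) - len(bits) % 8
    return bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))

# Eight rolls yield three bytes; the final three rolls yield one byte.
print(rolls_to_bytes([3, 6, 1, 1, 2, 5, 4, 1]).hex())
print(rolls_to_bytes([4, 2, 3]).hex())
```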
So what’s the next step? That depends on what you want to use the random data for. For some purposes, such as generating a high-quality 128-bit AES key, I would be done: the key is right there. To generate a high-quality ECC private key, you need to generate somewhat more randomness (matching the ECC curve size) and do a couple of EC operations. To generate a high-quality RSA private key, unfortunately, you will need much more randomness, to the point where it makes more sense to implement a PRNG seeded with a strong 128-bit seed generated using this process. The latter approach is the general solution: generate 128 bits of data using the dice approach, and then seed a CSPRNG of your choice to generate a large amount of data quickly. These steps are somewhat technical, and you lose the pen-and-paper properties, but the code to implement them is easier to verify than verifying that you get good-quality entropy out of an RNG implementation.
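As an illustration of the seeded-PRNG approach, here is a toy construction of my own that expands a 16-byte seed by hashing it together with a counter using SHA-256; this is a sketch for explanation only, not a recommendation, and in practice you would use a vetted CSPRNG construction (e.g. HKDF or a ChaCha20-based generator):

```python
import hashlib

def csprng_stream(seed: bytes, n: int) -> bytes:
    """Expand a dice-generated seed into n pseudorandom bytes by
    hashing seed || counter with SHA-256 in counter mode (a toy
    illustration of seeding a deterministic generator)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

# Placeholder seed; in real use, substitute your 16 dice-generated bytes.
seed = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
material = csprng_stream(seed, 64)
print(len(material))  # 64
```

The output is deterministic given the seed, which is exactly the property you want: all the entropy comes from the dice, and the expansion step is plain, auditable code.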
I had prepared a long and somewhat emotional blog post called "On unintended consequences" to get a rather sad bit of news off my chest. While I believe the points raised were logical, courteous, and overall positive, I decided to do something different and replace sad things with happy things.
So anyway, for 3-4 people you will need:
- The largest, widest cooking pot you can find (you want surface area to let more water evaporate)
- 500g noodles, preferably Bavette
- 300g cherry tomatoes
- ~150g sundried tomatoes
- ~150g grilled peppers
- a handful of olives
- two medium-sized red onions
- as much garlic as is socially acceptable in your group
- one or two handfuls of fresh basil leaves
- large gulp of olive oil
- ~100g fresh-ground Parmesan
- salt, to taste
- random source of capsaicin, to taste
Proceed to the cooking part of the evening:
- Slice and cut all vegetables into sizes of your preference; personally, I like to stay on the chunky side, but do whatever you feel like.
- Pour the olive oil into the pot; optionally add oil from your sundried tomatoes and/or grilled peppers in case those came in oil.
- Put the pot onto high heat and toss the chopped vegetables in as soon as it starts heating up.
- Stir for maybe a minute, then add a bit of water.
- Toss in the noodles and add just enough water to cover everything.
- Now is a good time to add salt and capsaicin, to taste.
- Cook everything down on medium to high heat while stirring and scraping the bottom of the pot so nothing burns. You want to get as much water out of the mix as possible.
- Towards the end, maybe a minute before the noodles are al dente, wash the basil leaves and rip them into small pieces.
- Turn off the heat, add all basil and cheese, stir a few times, and serve.
If you don't have any of those ingredients on hand and/or want to add something else: Just do so. This is not an exact science and it will taste wonderful any way you make it.
It’s actually been possible for some time, but I made that simpler recently, and I figured I should mention it.
- Grab the iceweasel source
$ apt-get source iceweasel
- Install its build dependencies
$ apt-get build-dep iceweasel
- Build it
$ cd iceweasel-*
$ PRODUCT_NAME=firefox dpkg-buildpackage -rfakeroot
I’m never sure whether to post such things here, but I hope that it’s of interest to people: I’m trying to hire a top-notch Linux person for a 100% telecommute position. I’m particularly interested in people with experience managing 500 or more OS instances. It’s a shop with a lot of Debian, by the way. You can apply at that URL and mention you saw it in my blog if you’re interested.