So three weeks ago it occurred to me that it would be useful to test the reproducibility of Jessie too (until then we had only tested unstable and experimental). This gives us a nice data point to compare future developments against, and since we wanted to test "testing" eventually anyway, we might as well start while "testing" is still jessie... (Once stretch development has started there will be less to test, as in the beginning stretch will be jessie anyway.)
Quite quickly I then realized that building all packages twice on the current, quite heavily loaded jenkins.debian.net machine, would take 4-6 weeks and that it might well happen that we'll release Jessie before this has finished. (And I do want a Jessie release rather sooner than later!)
So I've asked Profitbricks, who have been sponsoring the jenkins.debian.net setup since October 2012, for some more resources temporarily, and once again they quickly helped out and the next day I could add eight cores and 25 GB of RAM to the existing VM, for a total of 23 Cores and 75 GB RAM.
So currently jenkins.d.n runs eight reproducible build jobs simultaneously, each of them building packages with pbuilder in a tmpfs in RAM. These builds are really fast: for small packages without additional build-depends we see build times as low as 40 seconds, which may not sound impressive at first, but that includes the source download, building twice, untar'ing the base.tgz twice, and running debbindiff on the resulting binary packages. (And all this while many other jobs continue to hammer the machine as well.)
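For anyone wanting to try something similar, here is a sketch of one way to get pbuilder building in RAM. This is an assumption about the general approach, not the actual jenkins.debian.net configuration; the size value is illustrative.

```shell
# /etc/fstab: a tmpfs large enough for a build chroot plus the package tree
tmpfs /var/cache/pbuilder/build tmpfs defaults,size=8g 0 0

# /etc/pbuilderrc: keep the build place (and thus the base.tgz extraction)
# on that tmpfs; hardlinking the apt cache across filesystems won't work,
# so disable it
BUILDPLACE=/var/cache/pbuilder/build
APTCACHEHARDLINK=no
```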
(BTW: I doubt using LVM snapshots is faster than running this in tmpfs, but if you think LVM is faster I'd be curious to see some numbers.)
Interestingly, this rebuild also turned up a rather unexpected result: we found an RC bug in jessie which caused build failures for >20 packages: "#781076 cdbs: perlmodule-vars.mk LDDLFLAGS quoting problem".
And then there were some results that were unexpected, but in hindsight to be expected: testing turned out to "only" give reproducible results for 79.4% of all packages in main, while unstable has 82.6% reproducible packages. This surprised me: our (few) fixed packages in our repo are also used for testing "testing", and as testing is about 1000 packages smaller than sid, I initially believed the reproducible percentage should be higher.
So why was the lower percentage to be expected? This is due to two factors: since FOSDEM we have introduced a number of new variations for the second build, i.e. we now build with a.) a different domainname, b.) a different umask, c.) a different timezone and d.) a different kernel version (using 'linux64 --uname-2.6'). And secondly, most packages in sid haven't been rebuilt since FOSDEM. Thus we'll now reschedule all packages in testing which are unreproducible in unstable, which should decrease the current reproducibility of sid a bit...
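To make the variations concrete, here is a minimal shell sketch (my own illustration, not the actual jenkins job code) showing how two of them, umask and timezone, can change what a build produces:

```shell
# Illustrative only: environment for the "varied" second build.
# (The real jobs also vary the domainname, and fake the kernel version
# by running the build under 'linux64 --uname-2.6'.)
export TZ="Etc/GMT+12"   # different timezone than the first build
umask 0002               # different umask (0022 is the usual default)

# Any file the build creates now gets group-writable permissions...
touch artifact
stat -c %a artifact      # prints 664 instead of the usual 644

# ...and any timestamp the build embeds is rendered 12 hours off.
date
```

If a package records file permissions or formatted timestamps in its binary output, the two builds differ and debbindiff flags it.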
For those wondering about the 622 packages failing to build in testing: at least 453 are due to our test setup, categorized in (currently) three issues: timestamps_from_cpp_macros, ftbfs_werror_equals and ocaml_configure_not_as_root. The ones in the third category are rather trivial to fix, so this leaves 622-453=169, minus those fixed by #781076, so roughly 150 packages which fail to build in our setup and need to be investigated...
And for those wondering about other missing variations: there is at least one big one missing, changing the build date. The current guesstimate is that this will make 1,000-2,000 packages unreproducible again, but we'll only know for sure once we have tested them.
So I would like to once again thank Profitbricks for supporting jenkins.d.n over the last 2.5 years and making reproducible.d.n possible so smoothly. Being able to painlessly add more resources when we needed them was incredibly useful and I hope we can count on their support in the future. (I have some more ideas for how to burn resources usefully in the future. Stay tuned, and btw, if you know how to put more hours in the day, please do tell me.)
And, of course: thanks for your work on Debian, no matter whether you've been working on Jessie, reproducibility or something totally different!
To finish this post I'd like to remind everyone that currently all this is just about the prospects of reproducible builds for Debian. Debian is not 80% reproducible yet - but it easily could be! And I certainly hope it will be "soon", and hopefully "soon" will only mean a few months. We will rebuild and see.
The SFLC is hiring: idealist job posting
This is not actually an April 1 joke.
This seems as good a day as any to mention that I am a founding member of ArchiveTeam.
Way back, when Geocities was closing down, I was one of a small rag-tag group who saved a copy of most of it. That snapshot has since generated more publicity than most other projects I've worked on. I've heard many heartwarming stories of it being the only remaining copy of baby pictures and writings of deceased friends, and so on. It's even been the subject of serious academic study as outlined in this talk, which is pretty awesome.
I'm happy to let this guy be the public face of ArchiveTeam in internet meme-land. It's a 0.1% project for me, and has grown into a well-oiled machine, albeit one that shouldn't need to exist. I only get involved these days when there's another crazy internet silo fire drill and/or I'm bored.
(Rumors of me being the hand model for ArchiveTeam are, however, unsubstantiated.)
Today, Aaron Shaw and I are pleased to announce a new startup. The startup is based around an app we are building called RomancR that will bring the sharing economy directly into your bedrooms and romantic lives.
When launched, RomancR will bring the kind of market-driven convenience and efficiency that Uber has brought to ride sharing, and that AirBnB has brought to room sharing, directly into the most frustrating and inefficient domain of our personal lives. RomancR is Uber for romance and sex.
Here’s how it will work:
- Users will view profiles of nearby RomancR users that match any number of user-specified criteria for romantic matches (e.g., sexual orientation, gender, age, etc).
- When a user finds a nearby match who they are interested in meeting, they can send a request to meet in person. If they choose, users initiating these requests can attach an optional monetary donation to their request.
- When a user receives a request, they can accept or reject the request with a simple swipe to the left or right. Of course, they can take the donation offer into account when making this decision or “counter-offer” with a request for a higher donation. Larger donations will increase the likelihood of an affirmative answer.
- If a user agrees to meet in person, and if the couple then subsequently spends the night together — RomancR will measure this automatically by ensuring that the geolocation of both users’ phones match the same physical space for at least 8 hours — the donation will be transferred from the requester to the user who responded affirmatively.
- Users will be able to rate each other in ways that are similar to other sharing economy platforms.
Of course, there are many existing applications like Tinder and Grindr that help facilitate romance, dating, and hookups. Unfortunately, each of these still relies on old-fashioned “intrinsic” ways of motivating people to participate in romantic endeavors. The sharing economy has shown us that systems that rely on these non-monetary motivations are ineffective and limiting! For example, many altruistic and socially-driven ride-sharing systems existed on platforms like Craigslist or Ridejoy before Uber. Similarly, volunteer-based communities like Couchsurfing and Hospitality Club existed for many years before AirBnB. None of those older systems took off in the way that their sharing economy counterparts were able to!
The reason that Uber and AirBnB exploded where previous efforts stalled is that this new generation of sharing economy startups brings the power of markets to bear on the problems they are trying to solve. Money both encourages more people to participate in providing a service and also makes it socially easier for people to take that service up without feeling like they are socially “in debt” to the person providing the service for free. The result has been more reliable and effective systems for providing rides and rooms! The reason that the sharing economy works, fundamentally, is that it has nothing to do with sharing at all! Systems that rely on people’s social desire to share without money — projects like Couchsurfing — are relics of the previous century.
RomancR, which we plan to launch later this year, will bring the power and efficiency of markets to our romantic lives. You will leave your pitiful dating life where it belongs in the dustbin of history! Go beyond antiquated non-market systems for finding lovers. Why should we rely on people’s fickle sense of taste and attractiveness, their complicated ideas of interpersonal compatibility, or their sense of altruism, when we can rely on the power of prices? With RomancR, we won’t have to!
Note: Thanks to Yochai Benkler whose example of how leaving a $100 bill on the bedside table of a person with whom you spent the night can change the nature of a romantic interaction inspired the idea for this startup.
The event has yet to appear in the MiniDebconf agenda. Rumor has it that the hurds of fans that tend to attend his events pose a logistics and safety challenge, which is why the event might only appear in the agenda a couple of days before it takes place.
Lyon being such an easy place to get to by train, plane, car, and even by boat, it is understandable, and should be expected, that a great number of people will attend the minidebconf solely for his session.
If you have not yet registered, I'd recommend you do so now and book your travel and accommodation ASAP, before everything is overpriced due to high demand.
See you in ten days!
P.S. kudos to the organizers!
My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS
This month I have been paid to work 15.25 hours on Debian LTS. In that time I did the following:
- CVE triage: I pushed 37 commits to the security tracker and contacted 20 maintainers about security issues affecting their packages.
- I started a small helper script based on the new JSON output of the security tracker (see #761859 for details). It’s not ready yet, but once done it will make it easier to detect issues where the LTS team lags behind the security team (and other such divergences), and it will speed up future CVE triage work.
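The tracker's JSON export is public, so the idea behind such a helper can be sketched in a couple of lines. The endpoint below is the tracker's JSON export; the jq filter is my own illustration, not the actual script:

```shell
# Fetch the security tracker's JSON export (keyed by source package name)
# and list the packages it covers. A real helper would go further and
# compare per-suite status to spot where the LTS team lags behind.
curl -s https://security-tracker.debian.org/tracker/data/json > tracker.json
jq -r 'keys[]' tracker.json | head
```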
- I sent DLA-174-1 (a tcpdump update fixing 3 CVEs) after having received a debdiff from Romain Françoise.
- I prepared DLA-175-1 on gnupg, fixing 3 CVEs.
- I prepared DLA-180-1 on gnutls26, fixing 3 CVEs.
That’s it for the paid work. Still on the topic of LTS, I proposed two events for DebConf 15:
- Inner workings of an unusual team in Debian: the Long Term Support team: a generic presentation of the team and the project;
- Preparing for Wheezy LTS: a work session between the security team and the LTS team.
In my last Freexian LTS report, I mentioned briefly that it would be nice to have a logo for the LTS project. Shortly after I got a first logo prepared by Damien Escoffier and a few more followed: they are available on a wiki page (and the logo you see above is from him!). Following a suggestion from Paul Wise, I registered the logo request on another wiki page dedicated to artwork requests. That kind of collaboration is awesome! Thanks to all the artists involved in Debian.

Debian packaging
Django. This month has seen no fewer than 3 upstream point releases packaged for Debian (1.7.5, 1.7.6 and 1.7.7), and they have been accepted by the release team into Jessie. I’m pleased with this tolerance, as I have argued the case for it multiple times in the past, given the sane upstream release policy (bugfix-only in a given released branch).
Python code analysis. I discovered a few months ago a tool combining the power of multiple Python code analysis tools: it’s prospector. I just filed a “Request for Package” for it (see #781165) and someone already volunteered to package it, yay \o/
update-rc.d and systemd. While working on a Kali version based on Jessie, I got hit by what boils down to a poor interaction between systemd and update-rc.d (see #746580) and after some exchanges with other affected users I raised the severity to serious as we really ought to do something about it before release. I also opened #781155 on openbsd-inetd as its usage of inetd.service instead of openbsd-inetd.service (which is only provided as a symlink to the former) leads to multiple small issues.

Misc
Debian France. The general assembly is over and the new board elected its new president: it’s now official, I’m no longer Debian France’s president. Good luck to Nicolas Dandrimont who took on this responsibility.
Salt’s openssh formula. I improved salt’s openssh formula to make it possible to manage the /etc/ssh/ssh_known_hosts file referencing the public SSH keys of other managed minions.
Tendenci.com. I was looking for a free software solution to handle membership management for a large NPO and I discovered Tendenci. It looked very interesting feature-wise and it’s written in a language/framework that I enjoy (Python/Django). But while it’s free software, there’s no community at all. The company that wrote it released it under a free software license, and it really looks like they intended to build a community but failed at it. When I looked, their “development forums” were web-based and mostly empty, with only initial discussions among the current developers and no replies from anybody… there’s also no mention of an IRC channel or a mailing list. I sent them a mail to see what kind of collaboration we could expect if we opted for their software and got no reply. A pity, really.
What free software membership management solution would you use when you have more than 10,000 members to handle and you want to use the underlying database to offer SSO authentication to multiple external services?

Thanks
See you next month for a new summary of my activities.
After two successful rounds in 2014, I’m helping put on another round of the Community Data Science Workshops. Last year, our 40+ volunteer mentors taught more than 150 absolute beginners the basics of programming in Python, data collection from web APIs, and tools for data analysis and visualization, and we’re still in the process of improving our curriculum and scaling up.
Once again, the workshops will be totally free of charge and open to anybody. Once again, they will be possible through the generous participation of a small army of volunteer mentors.
We’ll be meeting for four sessions over three weekends:
- Setup and Programming Tutorial (April 10 evening)
- Introduction to Programming (April 11)
- Importing Data from web APIs (April 25)
- Data Analysis and Visualization (May 9)
If you’re interested in attending, or interested in volunteering as mentor, you can go to the information and registration page for the current round of workshops and sign up before April 3rd.
It is with great pleasure and satisfaction that I release version 4.1 of Obnam, my backup program. This version includes radically innovative approaches to data compression and de-duplication, as well as some other changes and bug fixes.
Major user-visible changes:
Obnam now recognises most common image types, and de-duplicates them by substituting a standard picture of a cat or a baby. Statistical research has shown that almost all pictures are of either cats or babies, and most people can't tell cats or babies apart. If you have other kinds of pictures, use the --naughty-pictures option to disable this new feature.
Obnam now compresses data by finding a sequence in the value of pi (3.14159...) that matches the data, and stores the offset into pi and the length of the data. This means almost all data can be stored using two BIGNUM integers, plus some computation time to compute the value of pi with necessary precision. The extreme compression level is deemed worth the somewhat slower speed. To disable this new feature, use the --i-like-big-bits-and-i-cannot-lie option.
Obnam now uses one-time pad encryption in the repository. It is a form of encryption that is guaranteed to be unbreakable. Given the large amounts of data Obnam users have, the infinitely long value of the mathematical constant e is used as the encryption pad, since it would be bad security practice to use a pad that's shorter than the data being encrypted. To disable this new feature and use the old style encryption using GnuPG, use --i-read-schneier.
Minor user-visible changes:
There is a new subcommand obnam resize-disk, which resizes the filesystem on which the backup repository resides. In this version, it works on LVM logical volumes and RAID-0, RAID-5, and RAID-6 drive arrays using mdadm. The subcommand optionally arranges more space by deleting live data files and reducing corresponding LV sizes to make more space for backups. If live data is deleted, the backup generations containing the data are tagged as un-removable so it's not lost. In the future, the subcommand may get support for purchasing more disk space from popular online storage providers.
To reduce unnecessary bloat, the obnam restore subcommand has been removed. It was considered unnecessary, since nobody ever reported any problems with it.
Obnam now has a new repository option, --swap-in-repository, which starts a daemon process that holds all backup data in memory. Once the process grows enough, this will result in most of the data to be written to the swap partition. This makes excellent use of the excessively large swap partitions on many Linux systems. This feature does not work on Windows.
The obnam donate command to send the Obnam developers some money now works with Bitcoin again. There was a bug that prevented Obnam's built-in Bitcoin mining software from working.
The obnam help command again speaks the user's preferred language (LC_MESSAGES locale setting), rather than Finnish, despite pressure from the Finnish government's office for language export.
I started working for Valve as a community manager.
The Debian and FLOSS communities aren't made up only of coding developers. They include people who write news, who talk about FLOSS, who help at booths and conferences, who create the community's artwork, and many others who contribute in countless ways. One lady who does much of that is Francesca Ciceri, known in Debian as MadameZou. She is a non-packaging Debian Developer, a fearless warrior for diversity and a zombie fan. Although that sounds intimidating, she is deeply caring and a great human being. So, what does MadameZou have to tell us?
Who are you?
My name is Francesca and I'm totally flattered by your intro. The fearless warrior part may be a bit exaggerated, though.
What have you done and what are you currently working on in FLOSS world?
I've been a Debian contributor since late 2009. My journey in Debian has touched several non-coding areas: from translation to publicity, from videoteam to www. I've been one of the www.debian.org webmasters for a while, a press officer for the Project as well as an editor for DPN. I've dabbled a bit in font packaging, and nowadays I'm mostly working as a Front Desk member.
Setup of your main machine?
Wow, that's an intimate question! Lenovo Thinkpad, Debian testing.
Describe your current most memorable situation as FLOSS member?
Oh, there are a few. One awesome, tiring and very satisfying moment was during the release of Squeeze: I was a member of the publicity and www teams at the time, and we had to pull 10 hours of team work to put everything in place. It was terrible and exciting at the same time. I shudder to think of the amount of work required from the ftpmasters and the release team during a release. Another awesome moment was my first DebConf: I was so overwhelmed by the sense of belonging in finally meeting all these people I'd been working with remotely for so long, and embarrassed by my poor English skills, and overall happy just to be there... If you are a Debian contributor I really encourage you to participate in Debian events, be they small and local or as big as DebConf: it really is like finally meeting family.
Some memorable moments from Debian conferences?
During DC11, the late nights with the "corridor cabal" in the hotel, chatting about everything. A group expedition to watch shooting stars in the middle of nowhere, during DC13. And a very memorable videoteam session: it was my first time directing and everything that could go wrong, went wrong (including the speaker deciding to take a walk outside the room, to demonstrate something, out of the cameras' range). It was a disaster, but also fun: at the end of it, the whole video crew was literally in stitches. But there are many awesome moments, almost too many to recall. Each conference is precious in that regard: for me the socializing part is extremely important; it's what cements relationships, helps remote work go smoothly, and gives you the motivation to volunteer for tasks that sometimes aren't exactly fun.
You are known as a Front Desk member for DebConfs - what does that work involve and why do you enjoy doing it?
I'm not really a member of the team: just one of Nattie's minions!
You have also been part of the DebConf video team - care to share some insights into its work and the benefits it provides to the Debian Project?
The video team's work is extremely important: it makes it possible for people who are not attending to follow the conference, providing both live streaming and recordings of all talks. I may be biased, but I think that DebConf's video coverage and the high quality of the final recordings are unrivaled among FLOSS conferences - especially since it's all volunteer work and most of us aren't professionals in the field. During the conference we take shifts filming the various talks - for each talk we need approximately 4 volunteers: two camera operators, a sound mixer and the director. After the recording comes the boring part: reviewing, cutting and sometimes editing the videos. It's a long process, and during the conference you can sometimes spot videoteam members doing it at night in the hacklab, exhausted after a full day of filming. And then the videos are finally ready to be uploaded, for your viewing pleasure. Over the last years this process has become faster thanks to the commitment of many volunteers, so that now you have to wait only a few days, sometimes a week, after the end of the conference to be able to watch the videos. I personally love contributing to the videoteam: you get to play with all that awesome gear and you actually make a difference for all the people who cannot attend in person.
You are also a non-packaging Debian Developer - how does that feel?
Feels awesome! The mere fact that the Debian Project decided - in 2009, via a GR - to recognize the many volunteers who contribute without doing packaging work is a great show of inclusiveness, in my opinion. In a big project like Debian just packaging software is not enough: the final result relies heavily on translators, sysadmins, webmasters, publicity people, event organizers and volunteers, graphic artists, etc. It's only fair that these contributions are deemed as valuable as the packaging, and that those people are given an official status. I was one of the first non-uploading DDs, four years ago, and for a long time there were really just a handful of us. In the last year I've seen many others applying for the role and that makes me really happy: it means that contributors have finally realized that they deserve to be an official part of Debian and to have "citizenship rights" in the project.
You were the leading energy behind Debian's diversity statement - what gave you the energy to drive it?
It seemed the logical conclusion of the extremely important work that Debian Women had done in the past. When I first joined Debian, in 2009, as a contributor, I was really surprised to find a friendly community and to not be discriminated against on account of my gender or my lack of coding skills. I may have been just lucky, landing in particularly friendly teams, but my impression is that the project has been slowly but unequivocally changed by the work of Debian Women, who first raised the need for inclusiveness and the awareness of the gender problem in Debian. I don't remember exactly how I stumbled upon the fact that Debian didn't have a Diversity Statement, but at first I was very surprised by it. I asked zack (Stefano Zacchiroli), who was DPL at the time, and he encouraged me to start a public discussion about it by sending out a draft - and he helped me all the way along the process. It took some back and forth on the debian-project mailing list, but the only thing really needed was someone to start the process and poke the discussion when it stalled - the main blocker was actually the wording of the statement. I learned a great deal from that experience, and I think it completely changed my approach to things like online discussions and general communication within the project. At the end of the day, what I took from it is a deep respect for everyone who participated, and the realization that constructive criticism certainly requires a lot of work from all parties involved, but can happen. As for the statement itself: these things are only as good as the best practices that keep them alive, but I think they are better stated explicitly than left unsaid.
You are also involved with another Front Desk - Debian's own, which handles the New Members process. What are the tasks of that FD and how rewarding is the work?
The Debian Front Desk is the team that runs the New Members process: we receive the applications, we assign each applicant a manager, and we verify the final report. In the last years the workflow has been simplified a lot by the redesign of the nm.debian.org website, but it's important to keep things running smoothly so that applicants don't face overly lengthy processes or wait too long before being assigned a manager. I've been doing it for a little more than a month, but it's really satisfying to usher people toward DDship! So this is how I feel every time I send a report over to DAM for an applicant to be accepted as a new Debian Developer:
How do you see future of Debian development?
Difficult to say. What I can say is that I'm pretty sure that, whatever the technical direction we'll take, Debian will remain focused on excellence and freedom.
What are your future plans in Debian, what would you like to work on?
Definitely bug wrangling: it's one of the things I do best and I've not had a chance to do it extensively for Debian yet.
Why should developers and users join Debian community? What makes Debian a great and happy place?
We are awesome, that's why. We are strongly committed to our Social Contract and to our users' freedom, we are steadily improving our communication style and trying to be as inclusive as possible. Most of the people I know in Debian are perfectionists and outright brilliant at what they do. Joining Debian means working hard on something you believe in, identifying with a whole project, meeting lots of wonderful people and learning new things. It can be frustrating and exhausting at times, but it's totally worth it.
You have been involved in Mozilla as part of OPW - care to share some insights into Mozilla, what you did there, and how it compares to Debian?
That has been a very good experience: it meant having the chance to peek into another community, learn about their tools and workflow, and contribute in different ways. I was an intern for the Firefox QA team, whose work spans from setting up specific tests and automated checks on the three versions of Firefox (Stable, Aurora, Nightly) to general bug triaging. My main job was bug wrangling, and I loved the fact that I was a sort of intermediary between developers and users, someone who spoke both languages and could help them work together. As for the comparison: Mozilla is surely more diverse than Debian, both in contributors and users. I'm not only talking demographics here, but also what tools and systems are used, what kinds of skills people have, etc. That meant reaching some compromises with myself over little things: like having to install a proprietary tool used for the team meetings (and going crazy trying to make it work with Debian) or communicating more on IRC than on mailing lists. But those are pretty much the challenges you have to face whenever you go out of your comfort zone.
You also volunteer for the Organization for Transformative Works - what is it, what work do you do, and care to share some interesting stuff?
OTW is a non-profit organization, created by fans, to preserve fan history and cultures. Its work ranges from legal advocacy and lobbying on fair use and copyright-related issues, to developing and maintaining AO3 -- a huge fanwork archive based on open-source software -- to producing a peer-reviewed academic journal about fanworks. I'm an avid fanfiction reader and writer, and joining the OTW volunteers seemed a good way to give back to the community - in true Debian fashion. As a volunteer, I work for the Translation Committee: we are more than a hundred people - divided into several language teams - translating the OTW website, the interface of the AO3 archive, newsletters, announcements and news posts. We have an org-wide diversity statement, training for recruits, an ever-growing set of procedures to smooth our workflow, monthly meetings and movie nights. It's an awesome group to work with. I'm deeply invested in this kind of work, both for the awesomeness of the OTW people and for the big role that fandom and fanworks have in my life. What I find amazing is that the same concept we - as in the FLOSS ecosystem - apply to software can be applied to cultural production: taking a piece of art you love and expanding, remixing, exploring it. Just for the fun of it. Protecting and encouraging the right to play in this cultural sandbox is IMO essential for our society. Most of the participants in fandom come from marginalised groups or minorities whose points of view are usually not part of the mainstream narratives. This makes the act of writing, remixing and re-interpreting a story not only a creative exercise but a revolutionary one. As Elizabeth Minkel says: "My preferred explanation is the idea that the vast majority of what we watch is from the male perspective - authored, directed, and filmed by men, and mostly straight white men at that. Fan fiction gives women and other marginalised groups the chance to subvert that perspective, to fracture a story and recast it in her own way." In other words, "fandom is about putting debate and conversation back into an artistic process".
On a personal note - you do a lot of DIY, handmade work. What have you done, what joy does it bring you, and can you share a picture of it?
I like to think that the hacker in me morphs into a maker whenever I can actually manipulate stuff. The urge to explore ways of doing things, to create and change, is probably the same. I've been blessed with curiosity and craftiness and I love learning new DIY techniques: I can't describe it, really, but if I don't make something for a while I actually feel antsy. I need to create stuff. Nowadays I'm mostly designing and sewing clothes - preferably reproductions of dresses from the 40s and the 50s - and I'm trying to make a living from it. It's a nice challenge: there's a lot of research involved, as I always try to be historically accurate in design, sewing techniques and materials, and many hours of careful attention to detail. I'm right in the middle of doing photoshoots for most of my period pieces, so I'll share something different with you: a t-shirt refashion done with the DebConf11 t-shirt! (here's the tutorial)
Indeed, we settled on a release date for Jessie – and pretty quickly too. I sent out a poll on the 28th of March, and by yesterday it was clear that the 25th of April would be our release date. :)
With that said, we still have some items left that need to be done.
- Finishing the release notes. This is mostly pending myself and a few others.
- Translation of the release-notes. I sent out a heads up earlier today about what sections I believe to be done.
- The d-i team has another release planned as well.
- All the RC bugs you can manage to fix before the 18th of April. :)
Filed under: Debian, Release-Team
Note: this is a long overdue post. I upgraded some months ago… but I promised myself to blog about my selfhosting adventures, so here you are.
You may know the story… TL;DR
- I wanted to self host my web services.
- I bought a Microserver (N54L).
- I installed Debian stable there, RAID1 (BIOS) + cryptsetup + LVM (/ and swap, /boot in another disk, unencrypted).
- I installed GNU MediaGoblin, and it works!
- When rebooting, the password to unlock the disk (and then find the LVM volumes and mount the partitions) was not accepted. But it was accepted after I shut down, unplugged the power, plugged it back in, and turned the machine on.
After searching a bit for information about my problem and not finding anything helpful, I began to think that maybe upgrading to Jessie could fix it (recent versions of the kernel and cryptsetup…). The Jessie freeze was almost there, and I also thought that trying to make my MediaGoblin work in Jessie now, while I still hadn’t uploaded lots of content, would be a nice idea… And, I wanted to feel the adventure!
Whatever. I decided to upgrade to Jessie. This is the glory of “free software at home”: you only waste your time (and probably not even that, because you can learn something, at least what not to do).

Upgrading my system to Jessie, and making it boot!
I changed sources.list, updated, did a safe-upgrade, and then upgrade. Then reboot… and the system didn’t boot.
What happened? I’m not sure; everything looked “ok” during the upgrade… But now my system was not even asking for the passphrase to unlock the encrypted disk. It was trying to access the physical volume group as if it were on an unencrypted disk, and so failing. The boot process left me in an “initramfs” console in which I didn’t know what to do.
I asked help from @luisgf, the system administrator of mipump.es (a Pump.io public server) and mijabber.es (an XMPP public server). We met via XMPP and with my “thinking aloud” and his patient listening and advice, we solved the problem, as you will see:
I tried to boot my rescue system (a complete system installed in different partitions on a different disk) and it booted. I then tried to manually unlock the encrypted disk (cryptsetup luksOpen /dev/xxx), and it worked: I could list the volume group and the volumes, activate them, and mount the partitions. Yay! My (few) data was safe.
I rebooted and in the initramfs console I tried to do the same, but cryptsetup was not present in my initramfs.
Then I tried to boot the old Wheezy kernel: it didn’t ask for the passphrase to unlock the disk either, but in that initramfs console cryptsetup was working well. So after manually unlocking the disk, activating the volumes and mounting the partitions, I could exit the console and the system booted. #oleole!
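For reference, the manual sequence from the initramfs console looks roughly like this (device, volume group and logical volume names here are placeholders, not my actual setup):

```shell
# Unlock the LUKS container; this prompts for the passphrase
cryptsetup luksOpen /dev/sda2 sda2_crypt

# Detect and activate the LVM volume groups inside it
vgchange -ay

# Mount the root logical volume where the initramfs expects it
mount /dev/mapper/vg0-root /root
```

After that, exiting the initramfs console lets the normal boot process continue.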
So, how to tell the boot process to ask for the encryption password?
Maybe reinstalling the kernel would be enough… I tried to reinstall the 3.16 kernel package. It (re)generated /boot/initrd.img-3.16.0-4-amd64, and after restarting the system the problem was solved. It seems that the first time around, the initrd image was not generated correctly, and I didn’t notice.
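In case it happens again: the initrd can also be regenerated without reinstalling the whole kernel package (the version string below matches the Jessie kernel mentioned above):

```shell
# Rebuild the initramfs for a specific installed kernel version
update-initramfs -u -k 3.16.0-4-amd64

# Sanity check: cryptsetup should now be present inside the image
lsinitramfs /boot/initrd.img-3.16.0-4-amd64 | grep cryptsetup
```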
Well, problem solved. My system was booting again! No other boot problems and Jessie seemed to run perfectly. Thanks @luisgf for your help!
In addition to that, since then, my password has been accepted in every reboot, so it seems that the original problem is also gone.

A note on systemd
After all the noise of the last months, I was a bit afraid that some of the different services that run on my system would not start after the migration to systemd.
I had no special tweaks, just two ‘handmade’ init scripts (for MediaGoblin and for NoIP), but I didn’t write them myself (I just searched for systemd init scripts for the corresponding services), so if there was any problem there I was not sure that I could solve it. However, everything worked fine after the migration. Thanks, Debian hackers, for making this transition as smooth as possible!
My MediaGoblin was not working, and I was not sure why. Maybe I just needed to tune nginx or whatever after the upgrade… But I was not going to spend time figuring out which part of the stack was the culprit, and my MediaGoblin sites were almost empty… So I decided to follow the documentation again and reinstall (maybe updating would have been enough, who knows). I reused the Debian user(s), the PostgreSQL users and databases, and the .ini files and nginx configuration files. So it was quick, and it worked.

Updating Jessie
I have updated my Jessie system several times since then (kernel updates, OpenSSL, PostgreSQL, and other security updates and RC bug fixes, with the corresponding reboots or service restarts) and I didn’t experience the cryptsetup problem again. The system is working perfectly. I’m very happy.

Using dropbear to remotely provide the cryptsetup password
The last thing I did on my home server was setting up dropbear so I can remotely provide the encryption password, and then remotely reboot my system. I followed this guide and it worked like a charm.

Some small annoyances and TODO list
- I have some warnings at boot. I think they are not important, but I post them here anyway, and will try to figure out what they mean:
[ 0.203617] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[ 0.214828] ACPI: Dynamic OEM Table Load:
[ 0.214841] ACPI: OEMN 0xFFFF880074642000 000624 (v01 AMD NAHP 00000001 INTL 20051117)
[ 0.226879] \_SB_:_OSC evaluation returned wrong type
[ 0.226883] _OSC request data:1 1f
[ 0.227055] ACPI: Interpreter enabled
[ 0.227062] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S1_] (20140424/hwxface-580)
[ 0.227067] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20140424/hwxface-580)
[ 0.227070] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20140424/hwxface-580)
[ 0.227083] ACPI: (supports S0 S4 S5)
[ 0.227085] ACPI: Using IOAPIC for interrupt routing
[ 0.227298] HEST: Table parsing has been initialized.
[ 0.227301] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
And this one
[ 1.635130] ERST: Failed to get Error Log Address Range.
[ 1.645802] [Firmware Warn]: GHES: Poll interval is 0 for generic hardware error source: 1, disabled.
[ 1.645894] GHES: APEI firmware first mode is enabled by WHEA _OSC.
And this one, about the 250GB disk (it came with the server, it’s not in the RAID):
[ 3.320913] ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 3.321551] ata6.00: failed to enable AA (error_mask=0x1)
[ 3.321593] ata6.00: ATA-8: VB0250EAVER, HPG9, max UDMA/100
[ 3.321595] ata6.00: 488397168 sectors, multi 0: LBA48 NCQ (depth 31/32)
[ 3.322453] ata6.00: failed to enable AA (error_mask=0x1)
[ 3.322502] ata6.00: configured for UDMA/100
- It would be nice to learn a bit about benchmarking tools and test my system with and without the non-free Radeon VGA driver.
- I need to setup an automated backup system…
Some people commented about the benefits of the software RAID (mainly, not to depend on a particular, proprietary firmware, what happens if my motherboard dies and I cannot find a compatible replacement?).
Currently I have a RAID 1 (mirror) using the capabilities of the motherboard.
The problem is that, frankly, I am not sure about how to migrate the current setup (BIOS RAID + cryptsetup + LVM + partitions) to the new setup (software RAID + cryptsetup + LVM + partitions, or better other order?).
- Would it be enough to make a Clonezilla backup of each partition, wipe my current setup, boot with the Debian installer, create the new setup (software RAID, cryptsetup, LVM and partitions), and after that, stop the installation, boot with Clonezilla and restore the partition images?
- Or even better, can I (safely) remove the RAID in the BIOS, boot in my system (let’s say, from the first disk), and create the software RAID with that 2nd disk that appeared after removing the BIOS RAID (this sounds a bit like science fiction, but who knows!).
- Is it important “when”, or in which “layer”, I set up the software RAID?
As you see, lots of things to read/think/try… I hope I can find time for my home server more often!

Comments?
You can comment on this pump.io thread.
Filed under: My experiences and opinion Tagged: Debian, encryption, English, Free Software, libre software, MediaGoblin, Moving into free software, N54L, selfhosting, sysadmin
I previously wrote about tracking a ship around the world, but never followed up with the practical details involved with shipping my life from the San Francisco Bay Area back to Belfast. So here they are, in the hope they provide a useful data point for anyone considering a similar move.
Firstly, move out. I was in a one bedroom apartment in Fremont, CA. At the time I was leaving the US I didn’t have anywhere for my belongings to go - the hope was I’d be back in the Bay Area, but there was a reasonable chance I was going to end up in Belfast or somewhere in England. So on January 24th 2014 I had all of my belongings moved out and put into storage, pending some information about where I might be longer term. When I say all of my belongings I mean that; I took 2 suitcases and everything else went into storage. That means all the furniture for probably a 2 bed apartment (I’d moved out of somewhere a bit larger) - the US doesn’t really seem to go in for the concept of a furnished lease the same way as the UK does.
I had deliberately picked a moving company that could handle the move out, the storage and the (potential) shipping. They handed off to a 3rd party for the far end bit, but that was to be expected. Having only one contact to deal with throughout the process really helped.
Fast forward 8 months and on September 21st I contacted my storage company to ask about getting some sort of rough shipping quote and timescales to Belfast. The estimate came back as around a 4-6 week shipping time, which was a lot faster than I was expecting. However it turned out this was the slow option. On October 27th (delay largely due to waiting for confirmation of when I’d definitely have keys on the new place) I gave the go ahead.
Container pickup (I ended up with exclusive use of a 20ft container - not quite full, but not worth part shipment) from the storage location was originally due on November 7th. Various delays at the Port of Oakland meant this didn’t happen until November 17th. It then sat in Oakland until December 2nd. At that point the ETA into Southampton was January 8th. Various other delays, including a week off the coast of LA (yay West Coast Port Backups) meant that the ship finally arrived in Southampton on January 13th. It then had to get to Belfast and clear customs. On January 22nd 2015, 2 days shy of a year since I’d seen them, my belongings and I were reunited.
So, on the face of it, the actual time on the ship was only slightly over 6 weeks, but all of the extra bits meant that the total time from “Ship it” to “I have it” was nearly 3 months. Which to be honest is more like what I was expecting. The lesson: don’t forget to factor in delays at every stage.
The relocation cost in the region of US$8000. It was more than I’d expected, but far cheaper than the cost of buying all my furniture again (plus the fact there were various things I couldn’t easily replace that were in storage). That cost didn’t cover the initial move into storage or the storage fees - it covered taking things out, packing them up for shipment and everything after that. Including delivery to a (UK) 3rd floor apartment at the far end and insurance. It’s important to note that I’d included this detail before shipment - the quote specifically mentioned it, which was useful when the local end tried to levy an additional charge for the 3rd floor aspect. They were fine once I showed them the quote as including that detail.
Getting an entire apartment worth of things I hadn’t seen in so long really did feel a bit like a second Christmas. I’d forgotten a lot of the things I had, and it was lovely to basically get a “home in a container” delivered.
Registration for R/Finance 2015 is now open!
The conference will take place on May 29 and 30 at UIC in Chicago. Building on the success of the previous conferences in 2009-2014, we expect more than 250 attendees from around the world. R users from industry, academia, and government will be joining 30+ presenters covering all areas of finance with R.
We are very excited about the four keynote presentations given by Emanuel Derman, Louis Marascio, Alexander McNeil, and Rishi Narang.
The conference agenda (currently) includes 18 full presentations and 19 shorter "lightning talks". As in previous years, several (optional) pre-conference seminars are offered on Friday morning.
There is also an (optional) conference dinner at The Terrace at Trump Hotel. Overlooking the Chicago river and skyline, it is a perfect venue to continue conversations while dining and drinking.
We would like to thank our 2015 sponsors for the continued support enabling us to host such an exciting conference:
On behalf of the committee and sponsors, we look forward to seeing you in Chicago!

For the program committee:
Gib Bassett, Peter Carl, Dirk Eddelbuettel, Brian Peterson,
Dale Rosenthal, Jeffrey Ryan, Joshua Ulrich
See you in Chicago in May!
Konstantinos Margaritis: "Advanced Java® EE Development with WildFly" released by Packt (I was one of the reviewers!)
For the past months I had the honour and pleasure of being one of the reviewers of "Advanced Java® EE Development with WildFly" by Deepak Vohra. Today, I'm pleased to announce that the book has just been released by Packt:
It was my first time being a reviewer and it was a very interesting experience. I would like to thank the two Project Coordinators from Packt, Aboli Ambardekar and Suzanne Coutinho, who guided me through the reviewing process, so that my review would be as accurate as possible and related only to the technical aspects of the book. Looking at the process retrospectively, I now begin to understand the complexity of achieving a balance between the author's vision for the book and the scrutiny of the (many) reviewers.
And of course I would like to thank the author, Deepak Vohra, for writing the book in the first place. I'm looking forward to reading the actual physical book :)
Here’s a puzzle I’m having trouble figuring out. This afternoon, ssh from my workstation or laptop stopped working to any of my servers (at OVH). The servers are all running wheezy, the local machines jessie. This happens on both my DSL and when tethered to my mobile phone. They had not applied any updates since the last time ssh worked. When looking at it with ssh -v, they were all hanging after:
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr firstname.lastname@example.org none
debug1: kex: client->server aes128-ctr email@example.com none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
Now, I noticed that a server on my LAN — running wheezy — could successfully connect. It was a little different:
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
And indeed, if I run ssh -o MACs=hmac-md5, it works fine.
Now, I tried rebooting machines at multiple ends of this. No change. I tried connecting from multiple networks. No change. And then, as I was writing this blog post, all of a sudden it works normally again. Supremely weird! Any ideas what I can blame here?
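In the meantime, the workaround can be pinned in ~/.ssh/config so ssh picks it up automatically, instead of typing the -o option every time (the host pattern here is a placeholder for the affected servers):

```
Host *.example.org
    MACs hmac-md5
```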
Backup-manager is a tool that creates backups and stores them locally. It’s really useful for keeping regular backups of a quickly-changing tree of files (like a development environment), or for traditional backups if you have an NFS mount on your server. Backup-manager is also able to send the backups to another server by FTP.
In order to verify the backups created by backup-manager, we will also use Backup Checker (stars appreciated :) ), an automated tool to verify backups. For each newly-created backup we want to check that:
- the directory wip/data exists
- the file wip/dump/dump.sql exists and has a size greater than 100MB
- the file wip/config/accounts did not change and has a specific MD5 hash sum.
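The MD5 reference value for such a control can be captured up front with md5sum. A small, self-contained sketch (the file path and contents here are made up for illustration):

```shell
# Create a stand-in for the real accounts file (contents are hypothetical)
mkdir -p wip/config
printf 'alice:admin\nbob:user\n' > wip/config/accounts

# The first field of the output is the value to paste into the Backup Checker control
md5sum wip/config/accounts
```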
We install backup-manager and Backup Checker. If you use Debian Wheezy, just use the following command:
apt-key adv --keyserver pgp.mit.edu --recv-keys 2B24481A \
  && echo "deb http://debian.mytux.fr wheezy main" > /etc/apt/sources.list.d/mytux.list \
  && apt-get update \
  && apt-get install backupchecker backup-manager
Backup-manager will ask which directory you want to back up; in our case we choose /home/joe/dev/wip.
In the configuration file /etc/backup-manager.conf, you need to have the following lines:
export BM_POST_BACKUP_COMMAND="backupchecker -c /etc/backupchecker -l /var/log/backupchecker.log"

Configuring Backup Checker
In order to configure Backup Checker, use the following commands:
# mkdir /etc/backupchecker && touch /var/log/backupchecker.log
Then write the following in /etc/backupchecker/backupmanager.conf:
[main]
name=backupmanager
type=archive
path=/var/archives/laptop-home-joe-dev-wip.%Y%m%d.master.tar.gz
files_list=/etc/backupchecker/backupmanager.list
You can see we’re using placeholders for the path value, in order to match the latest archive each time. More information about Backup Checker placeholders is available in the official documentation.
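These placeholders follow the usual strftime conventions, so you can preview what will be substituted on any given day, for example:

```shell
# %Y%m%d expands to the four-digit year, month and day, e.g. 20150328
date +%Y%m%d
```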
Last step, the description of your controls on the backup:
[files]
wip/data| type|d
wip/config/accounts| md5|27c9d75ba5a755288dbbf32f35712338
wip/dump/dump.sql| >100mb

Launch Backup Manager
Just launch the following command:
After Backup Manager is launched, Backup Checker is automatically launched and verifies the new backup of the day, in the directory where Backup Manager stores the backups.

Possible control failures
Let’s say the dump does not have the expected size. It means someone may have messed with the database! Backup Checker will warn you with the following message in /var/log/backupchecker.log:
$ cat /var/log/backupchecker.log
WARNING:root:1 file smaller than expected while checking /var/archives/laptop-home-joe-dev-wip-20150328.tar.gz:
WARNING:root:wip/dump/dump.sql size is 18. Should have been bigger than 104857600.
Another possible failure: someone created an account without asking anyone. The hash sum of the file will change. Here is the alert generated by Backup Checker:
$ cat /var/log/backupchecker.log
WARNING:root:1 file with unexpected hash while checking /var/archives/laptop-home-joe-dev-wip-20150328.tar.gz:
WARNING:root:wip/config/accounts hash is 27c9d75ba5a755288dbbf32f35712338. Should have been 27c9d75ba3a755288dbbf32f35712338.
Another possible failure: someone accidentally (or not) removed the data directory! Backup Checker will detect the missing directory and warn you:
$ cat /var/log/backupchecker.log
WARNING:root:1 file missing in /var/archives/laptop-home-joe-dev-wip-20150328.tar.gz:
WARNING:root:wip/data
Awesome, isn’t it? The power of a backup tool combined with an automated backup checker. No more surprises when you need your backups. Moreover, you are spared the time and effort of checking the backups by yourself.
What about you? Let us know what you think of it. We would be happy to get your feedback. The project cares about its users, and the “outdated” feature itself was an awesome idea from a feature request by one of the Backup Checker users. Thanks, Laurent!
I recently learnt that my former coworker Jonny has turned his efforts around his own monitoring system, Bloonix, into self-employment.
If you're considering outsourcing your monitoring, consider Bloonix. As a plus, all the code is open under the GPLv3 and available on GitHub. So if you'd rather not outsource, you can still set up an instance on your own. Since this has been a one-man show for a long time, most of the documentation is still in German. That might be a pro for some but a minus for others; if you like Bloonix, I guess documentation translations or a howto in English would be welcome. Besides that, Jonny is also the upstream author of a few Perl modules like libsys-statistics-linux-perl.
So another one has taken the bold step to base his living on free and open source software, something that always has my admiration. Jonny, I hope you'll succeed with this step.
A coworker and I debugged a fascinating problem today.
They had a tomcat7 installation with a couple of webapps, and one of the bundled libraries was logging in German. Everything else was logging in English (the webapps themselves, and the things the other bundled libraries did).
We searched around a bit, and eventually found, after unravelling another few layers of “library bundling another library as a convenience copy” (gah, Java!), that the wrongly-logging library (something jaxb/jax-ws) was using com.sun.xml.ws.resources.WsservletMessages, which contains quite a few com.sun.istack.localization.Localizable members. Looking at the other classes in that package, in particular Localizer, showed that it defaults to the java.util.Locale.getDefault() value for the language.
Which is set from the environment.
Looking at /proc/pid-of-JVM-running-tomcat7/environ showed nothing, “of course”. The system locale was, properly, set to English. (We mostly use en_GB.UTF-8 for better paper sizes and the metric system (unless the person requesting the machine, or the admin creating it, still likes the system to speak German *shudder*), but that one still had en_US.UTF-8.)
Browsing the documentation for java.util.Locale proved more fruitful: it also contains a setDefault method, which sets the new “default” locale… JVM-wide.
Turns out another of the webapps used that for some sort of internal localisation. Clearly, the containment of tomcat7 is incomplete in this case.
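One partial defence, sketched here as an untested assumption rather than a verified fix: pin the JVM's initial default locale explicitly when starting Tomcat, so it does not depend on the environment the JVM was started from. In Debian's tomcat7 this would go in /etc/default/tomcat7:

```shell
# Force the JVM's initial default locale, independent of the system environment
JAVA_OPTS="${JAVA_OPTS:-} -Duser.language=en -Duser.country=US"
```

Note this only controls the initial default; a webapp calling java.util.Locale.setDefault() at runtime will still change it JVM-wide, as described above.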
Documenting for the larger ’net, in case someone else runs into this. It’s not as if things like this would be showing up in the USA, where the majority of development appears to happen.