I do hang out in #debian-women on IRC, which shouldn't be much of a surprise after my last blog entry about my Feminist Year. And for readers of my blog it also shouldn't be much of a surprise that music is an important part of my life. Recently a Debian colleague asked me in said IRC channel whether I could recommend some female artists or bands. That got me looking through my recommendations so far, and unfortunately there weren't many of those in here. So I definitely want to work on that, because there are so many female singers, songwriters and bands out there that I would love to share with a broader audience.
I want to start out with a strong female voice who was introduced to me by another strong woman - thanks for that! Fiona Apple definitely has her own style; she is something special and stands out. Here are my suggestions:
- Hot Knife: This was the song I was introduced to her with. And I love the kettledrum rhythm and sound.
- Criminal: Definitely a different sound, but it was the song that won her a Grammy.
- Not About Love: Such a lovely composition. I do love the way she plays the piano.
Like always, enjoy!
today I had a short chat with a fellow DD living in a neighbouring country. nothing spectacular in itself; but it reminded me again that debian is more than creating an operating system together for me – it's also about a couple of friendships that grow out of it & which are dear to me.
this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.
You have a machine someplace, probably in The Cloud, and it has Linux installed, but not to your liking. You want to do a clean reinstall, maybe switching the distribution, or getting rid of the cruft. But this requires running an installer, and it's too difficult to run d-i on remote machines.
Wouldn't it be nice if you could point a program at that machine and have it do a reinstall, on the fly, while the machine was running?
This is what I've now taught propellor to do! Here's a working configuration which will make propellor convert a system running Fedora (or probably many other Linux distros) to Debian:
```haskell
testvm :: Host
testvm = host "testvm.kitenet.net"
	& os (System (Debian Unstable) "amd64")
	& OS.cleanInstallOnce (OS.Confirmed "testvm.kitenet.net")
		`onChange` propertyList "fixing up after clean install"
			[ User.shadowConfig True
			, OS.preserveRootSshAuthorized
			, OS.preserveResolvConf
			, Apt.update
			, Grub.boots "/dev/sda"
				`requires` Grub.installed Grub.PC
			]
	& Hostname.sane
	& Hostname.searchDomain
	& Apt.installed ["linux-image-amd64"]
	& Apt.installed ["ssh"]
	& User.hasSomePassword "root"
```
And here's a video of it in action.
It was surprisingly easy to build this. Propellor already knew how to create a chroot, so from there it basically just has to move files around until the chroot takes over from the old OS.
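The idea can be sketched in plain shell. To be clear, this is only an illustration of the approach, not propellor's actual code; the mirror URL and paths are placeholders, and doing this by hand on a live system is risky:

```shell
# Bootstrap a fresh Debian system into a directory on the same disk.
debootstrap unstable /new-os http://deb.debian.org/debian

# Then "move files around until the chroot takes over from the old OS":
# stash the old system and slide the new one into place.
mkdir /old-os
for dir in bin etc lib sbin usr var; do
    mv "/$dir" "/old-os/$dir"
    mv "/new-os/$dir" "/$dir"
done
```

Propellor's cleanInstallOnce property automates this, plus the bookkeeping needed to keep the running process, the network, and ssh access alive across the switch.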
After the cleanInstallOnce property does its thing, propellor is running inside a freshly debootstrapped Debian system. Then we just need a few more Properties to get from there to a bootable, usable system: install grub and the kernel, turn on shadow passwords, preserve a few config files from the old OS, etc.
It's really astounding to me how much easier this was to build than it was to build d-i. It took years to get d-i to the point of being able to install a working system. It took me a few part days to add this capability to propellor (it's 200 lines of code), and I've probably spent a total of less than 30 days developing propellor in its entirety.
So, what gives? Why is this so much easier? There are a lot of reasons:
Technology is so much better now. I can spin up cloud VMs for testing in seconds; I use VirtualBox to restore a system from a snapshot. So testing is much much easier. The first work on d-i was done by booting real machines, and for a while I was booting them using floppies.
Propellor doesn't have a user interface. The best part of d-i is preseeding, but that was mostly an accident; when I started developing d-i the first thing I wrote was main-menu (which is invisible 99.9% of the time) and we had to develop cdebconf, and tons of other UI. Probably 90% of d-i work involves the UI. Jettisoning the UI entirely thus speeds up development enormously. And propellor's configuration file blows d-i preseeding out of the water in expressiveness and flexibility.
Propellor has a much more principled design and implementation. Separating things into Properties, which are composable and reusable, gives enormous leverage. Strong type checking and a powerful programming language make it much easier to develop than d-i's mess of shell scripts calling underpowered busybox commands etc. Properties often Just Work the first time they're tested.
No separate runtime. d-i runs in its own environment, which is really a little custom linux distribution. Developing linux distributions is hard. Propellor drops into a live system and runs there. So I don't need to worry about booting up the system, getting it on the network, etc etc. This probably removes another order of magnitude of complexity from propellor as compared with d-i.
This seems like the opposite of the Second System effect to me. So perhaps d-i was the second system all along?
I don't know if I'm going to take this all the way to "propellor is d-i 2.0". But in theory, all that's needed now is:
- Teaching propellor how to build a bootable image, containing a live Debian system and propellor. (Yes, this would mean reimplementing debian-live, but I estimate 100 lines of code to do it in propellor; most of the Properties needed already exist.) That image would then be booted up and perform the installation.
- Some kind of UI that generates the propellor config file.
- Adding Properties to partition the disk.
cleanInstallOnce and associated Properties will be included in propellor's upcoming 1.1.0 release, and are available in git now.
Oh BTW, you could parameterize a few Properties by OS, and Propellor could be used to install not just Debian or Ubuntu, but whatever Linux distribution you want. Patches welcomed...
Look at that bug count!
At that pace, Jessie will happen before FOSDEM ;)
The UDD bugs interface currently knows about the following release critical bugs:
- In Total: 169 bugs
- Affecting Jessie: 226 (key packages: 119) That's the number we need to get down to zero before the release. They can be split in two big categories:
  - Affecting Jessie and unstable: 147 (key packages: 85) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
    - 28 bugs are tagged 'patch'. (key packages: 22) Please help by reviewing the patches, and (if you are a DD) by uploading them.
    - 10 bugs are marked as done, but still affect unstable. (key packages: 6) This can happen due to missing builds on some architectures, for example. Help investigate!
    - 109 bugs are neither tagged patch, nor marked done. (key packages: 57) Help make a first step towards resolution!
  - Affecting Jessie only: 79 (key packages: 34) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
How do we compare to the Squeeze release cycle?

| Week | Squeeze | Wheezy | Jessie |
|------|---------|--------|--------|
| 43 | 284 (213+71) | 468 (332+136) | 319 (240+79) |
| 44 | 261 (201+60) | 408 (265+143) | 274 (224+50) |
| 45 | 261 (205+56) | 425 (291+134) | 295 (229+66) |
| 46 | 271 (200+71) | 401 (258+143) | 427 (313+114) |
| 47 | 283 (209+74) | 366 (221+145) | 342 (260+82) |
| 48 | 256 (177+79) | 378 (230+148) | 274 (189+85) |
| 49 | 256 (180+76) | 360 (216+155) | 226 (147+79) |
| 50 | 204 (148+56) | 339 (195+144) | |
| 51 | 178 (124+54) | 323 (190+133) | |
| 52 | 115 (78+37) | 289 (190+99) | |
| 1 | 93 (60+33) | 287 (171+116) | |
| 2 | 82 (46+36) | 271 (162+109) | |
| 3 | 25 (15+10) | 249 (165+84) | |
| 4 | 14 (8+6) | 244 (176+68) | |
| 5 | 2 (0+2) | 224 (132+92) | |
| 6 | release! | 212 (129+83) | |
| 7 | release+1 | 194 (128+66) | |
| 8 | release+2 | 206 (144+62) | |
| 9 | release+3 | 174 (105+69) | |
| 10 | release+4 | 120 (72+48) | |
| 11 | release+5 | 115 (74+41) | |
| 12 | release+6 | 93 (47+46) | |
| 13 | release+7 | 50 (24+26) | |
| 14 | release+8 | 51 (32+19) | |
| 15 | release+9 | 39 (32+7) | |
| 16 | release+10 | 20 (12+8) | |
| 17 | release+11 | 24 (19+5) | |
| 18 | release+12 | 2 (2+0) | |
- Tandem skydive! or alternatively: a real "one-way" plane ticket ☺
- Start point: ~4'250 meters above the ocean (14'000ft)
- End point: on the beach
- Total time: less than ten minutes
- Time in "real" free fall: according to Wikipedia, around 12 seconds, with most of the terminal velocity being reached at about 8 seconds
- Terminal velocity: ~190 km/h, ~120 mph (again, according to Wikipedia)
- Time in "fake" free fall until the chute opened: around one minute
- Time spent going slowly down, with a few fun manoeuvres: don't remember, 3-5 minutes?
At the end of November we had a team offsite planned, with lots of fun and exciting activities in a somewhat exotic location. I was quite looking forward to it when - less than two weeks before the event - a colleague asked if anyone was interested in going skydiving as an extra activity. Without thinking too much, I said "yes" immediately, because: a) I'd never done it before, and b) it sounded really cool! Other people said yes as well, so we were set to have a really good time!
Of course, as the time counted down and the offsite approached, I was thinking: OK, this sounds cool, but will I be fine? Will I get altitude sickness? All kinds of such, rather logistical, questions. In order to not think too much, I did exactly zero research on the topic (all the mentions of Wikipedia above come from after-the-fact reading).
So, we went on the offsite - which was itself cool - and then, on the last day, right before going back, the skydive event!

How it went
The weather on the day of the jump was nice, the sky not perfectly clear, just a few small clouds and some haze. We waited for our turn, got the instructions for what to do (and not to do!), got hooked into the harness, prepared everything, and then boarded the plane; it needed only a very short run before taking off.
It took around ten minutes or so to get to the jump altitude, which I spent partially looking forward to it, partially trying to calm the various emotions I had - a very interesting mix. It was actually annoying just having to wait through the ten long minutes; I wished we would get to the jumping altitude faster. The altimeter on the instructor's hand was showing 4'000, then 4'100, 4'150, then he reminded me again what I needed to do (or rather, not to do), and then - people were already jumping from the plane! I was third from our team to jump, and I had the opportunity to see how people were not simply "exiting" the plane, but rather exiting and then immediately disappearing from view!
Finally we were on the edge of the door, a push, and then - I'm looking down, more than two and a half miles of nothing between me and the ground. Just air and the thought - "Why did I do this?" - as I start falling. For the first ten seconds or so, it's an actual free fall, gaining speed at almost standard acceleration, and the result - weightlessness - is the weirdest feeling ever: all your organs floating in your body, no compression or torsion forces. Much weirder than a roller-coaster, and one that never ends; the most I've had on roller-coasters was around one second of such acceleration, and you are still in contact with the seat or the restraints, whereas this long fall was very confusing for my brain - it felt somewhat like when you trip and need to do something to regain your balance, except in sky-diving you can't do anything, of course. There's nothing to grab, nothing to hold on to, and you keep falling.
After ten long seconds we reached terminal velocity; phase 1 ends and phase 2 begins, in which - while still falling - the friction with the air exactly compensates the earth's pull, and one falls at a constant speed. It's the most wonderful state ever: like floating on air, except that you're actually falling at almost 200 km/h - the closest feeling to flying, I guess. It doesn't hurt that you're no longer weightless, which means back to some level of normality.
The location of the skydive was very beautiful: the blue ocean beneath, the blue sky above, the beach somewhere to the side, and the air filling my mouth and lungs without any effort the only sign that I was moving really fast. The way this whole thing feels is very alien if you've never jumped before, but one gets accustomed to it quite fast - and that means I got too comfortable and excited and forgot the correct position to keep my legs in. The instructor reminded me, and as I put my legs back in the correct position - which is (among other things) with the soles of the feet pointing up - I felt again the air blowing strongly into my shoes, and a thought crossed my mind: what if the air blows off one of my shoes (the right one, more precisely) and I lose it? How do I get to the airport for the trip back? Will I look suspicious at the security check? The banality of this thought, given that I was still somewhere up in the air and travelling quite fast, was so comical that I started laughing ☺
And then, an unexpected noise, the chute opens, and I feel like someone is pulling me strongly up. Of course nobody is pulling "up", I'm just slowing down very fast on this final phase (Wikipedia says: 3 to 4g). And then, once at the new terminal velocity, the lack of wind noise and the quietness of everything around gives a different kind of awesome - more majestic and serene this time, rather than the adrenaline-filled moments before.
Because one is still high up and the beach looks small, you actually feel suspended in the air, almost frozen. Of course, that feeling went away quickly when the instructor started telling me to pull the strings and we entered a fast spin - so fast that my body was almost horizontal again - a reminder that we were still in the air, going somewhat fast, and not in "normal" conditions.
I was again reminded of the speed once we got closer to the earth; the people on the beach started to get bigger fast, and then we were gliding over the beach and finally landed in the sand. The adventure was over, but I was still pumped up, my body still full of adrenaline, and I felt like I had just been in heaven - which is true, for some definitions of ☺.
The first thing I realise is that the earth is very solid. And not moving at all. Everything is very very slow… which is both good and bad. My body is confused at the very fast sequence of events - why did everything stop?

Conclusion
I learned all about terminal velocity, how fast you reach it, and so on a day later, from Wikipedia and other sources. It helped explain and clarify the things I experienced during the dive, because up there in the air I was quite confused (and my body even more so). Knowing this in advance would have spoiled the surprise, but on the other hand it would have allowed me to enjoy the experience a bit more.
Looking back, I can say a few things. First, it was really awesome - not what I was expecting, much more awesome (in the real sense of awe) than I thought, but also not as easy or trivial as I had believed from just seeing videos of people "floating" during their dive. Yep, worth doing, and hard to actually put into words (I tried to, but I think this rambling is more confusing than helpful).
Phase one was too long (and a bit scary), phase two was too short (and the best thing), phase three was relaxing (and just the right length).
I also wonder how it is to jump alone - without the complicated and heavy harness, without an instructor, just you up there. Oh, and the parachute. And the reserve parachute ☺, of course. Point is, this was awesome, but I was mostly a passive spectator, so I wonder what it feels like to be actually in control (as much as one can be, falling down) and responsible.
And finally, as we left the offsite location just a couple of hours after the skydive and had a 4½-hour flight back, I couldn't believe how slow everything was. I had never experienced quite such a thing: I was sitting in a normal airplane flying high and fast, but for me everything was going in slow motion and I was bored out of my mind. Adrenaline aftershock or something like that? Also interesting!
I'm glad to announce that I've been awarded a 5,000 USD "Flash Grant" by the Shuttleworth Foundation.
Flash grants are an interesting funding model, which I've just learned about. You don't need to apply for them. Rather, you get nominated by current fellows, and then selected and approached by the foundation for funding. The grant amount is smaller than actual fellowships, but it comes with very few strings attached: furthering open knowledge (which is the foundation's core mission) and being transparent about how you use the money.
I'm lucky enough to already have a full-time job to pay my bills, and I do my Free Software activism mostly in my spare time. So I plan to use the money not to pay my bills, but rather to boost the parts of my Free Software activities that could benefit from some funding. I don't have a fully detailed budget yet but, tentatively: some money will go to fund Debsources development (by others), some into promoting my thoughts on the dark ages of Free Software, and maybe some into helping the upcoming release of Debian. I'll provide a public report at the end of the funding period (~6 months from now).
I'd like to thank the Shuttleworth Foundation for the grant and foundation fellow Jonas Öberg for making this possible.
Weblate 2.1 has been released today. It comes with native Mercurial support, user interface cleanup and various other fixes.
Full list of changes for 2.1:
- Added support for Mercurial repositories.
- Replaced the Glyphicon font with Font Awesome.
- Added icons for social authentication services.
- Better consistency of button colors and icons.
- Documentation improvements.
- Various bugfixes.
- Automatic hiding of columns in translation listing for small screens.
- Changed configuration of filesystem paths.
- Improved SSH keys handling and storage.
- Improved repository locking.
- Customizable quality checks per source string.
You can find more information about Weblate on http://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server; you can log in there with the account demo, using the password demo, or register your own user. Ready-to-run appliances will soon be available in the SUSE Studio Gallery.
If you are a free software project that would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.
Further development of Weblate would not be possible without people providing donations - thanks to everybody who has helped so far!
This was written in response to a message with a list of demotivating behaviours in email interactions, like fingerpointing, aggressiveness, resistance when being called out for misbehaving, public humiliation for mistakes, and so on.
There are times when I stumble on an instance of the set of things that were mentioned, and I think "ok, today I feel like doing some paid work rather than working on Debian".
If another day I wake up deciding to enjoy working on Debian, which I greatly do, I try and make sure that I can focus on bits of Debian where I don't stumble on any instances of the set of things that were mentioned.
Then I stumble on Gregor's GDAC and I feel like I'd happily lose one day of pay right now, and have fun with Debian.
I feel like Debian is this big open kitchen populated by a lot of people:
- some dump shit
- some poke the shit with a stick, contributing to the spread of the smell
- some carefully clean up the shit, which in the short term still contributes to the smell, but makes things better in the long term
- some prepare and cook, making a nice smell of food and NOMs
- some try out the food and tell us how good it was
I have fun cooking and trying out the food. I have fun being around people who cook and try out the food.
The fun in the kitchen seems to be correlated to several things, one of which is that it seems to be inversely proportional to the stink.
I find this metaphor interesting, and I will start thinking about the smell of a mailing list post. I expect it will put posts into perspective; I expect I will develop an instinct for it, so that I won't give a stinky post the same importance as a post that smells of food.
I also expect that the better I learn to tell the smell of food from the smell of shit, the more I can help clean it up, and the more I can help tell people who repeatedly contribute to the stink to please try cooking instead, or failing that, to just try and stay out of the kitchen.
Together with members of almost all credativ offices world-wide, I travelled to Madrid in October to attend PGConf Europe, the most important European PostgreSQL event. The conference was, as usual, greatly organized, had a lot of interesting presentations, and was thus rightfully sold out. It again brought together a lot of the PostgreSQL community.
My non-technical presentation about whether Open Source is a blessing or a curse attracted a sizeable audience despite starting early. We also got into a good discussion about some of the points raised, which again showed that Open Source in general and PostgreSQL in particular are increasingly being considered for their strategic importance.
Then in November I was invited to give a presentation at Open Source India Days 2014 in Bangalore - or Bengaluru, as it is nowadays called again. I was able to conveniently combine the trip to the conference with a scheduled visit to our office there.
The conference was a very pleasant surprise. Despite it being called the largest Open Source conference in Asia, I had never made it there before. And, frankly, I was impressed by its size. Maybe not so surprisingly, the conference was dominated by presentations about cloud solutions and technology.
I, however, talked about the importance of community for businesses, in particular pointing to Debian and PostgreSQL as very successful community projects. Again a large audience listened in, showing the interest in strategic aspects of Open Source. Due to time constraints we weren't able to have a discussion during the presentation, or even allow questions at the end, but a lot of people caught me afterwards to discuss points or make valuable remarks.
Both events are very worthy entries for next year's schedule.
My last problem with BTRFS was in August. BTRFS has been running mostly uneventfully for me for the last 4 months. That's a good improvement, but the fact that 4 months without problems is noteworthy for something as important as a filesystem is a cause for ongoing concern.

A RAID-1 Array
A week ago I had a minor problem with my home file server: one of the 3TB disks in the BTRFS RAID-1 started giving read errors. That’s not a big deal; I bought a new disk and did a “btrfs replace” operation, which was quick and easy. The first annoyance was that the output of “btrfs device stats” reported an error count for the new device; it seems that “btrfs replace” copies everything from the old disk, including the error count. The solution is to use “btrfs device stats -z” to reset the count after replacing a device.
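For reference, the sequence sketched below is roughly what such a replacement looks like (the device names and mount point are placeholders for this setup):

```shell
# Start an online replacement of the failing disk with the new one.
btrfs replace start /dev/sdb /dev/sdd /mnt/data

# Check progress; the filesystem stays mounted and usable throughout.
btrfs replace status /mnt/data

# After completion, zero the per-device error counters that were
# carried over from the old disk.
btrfs device stats -z /mnt/data
```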
I replaced the 3TB disk with a 4TB disk (with current prices it doesn’t make sense to buy a new 3TB disk). As I was running low on disk space I added a 1TB disk, to give 4TB of RAID-1 capacity; one of the nice features of BTRFS is that a RAID-1 filesystem can support any combination of disks and use them to store 2 copies of every block of data. I started running a btrfs balance to get BTRFS to try and use all the space, before learning from the mailing list that I should have run “btrfs filesystem resize” first to make it use all the space. So my balance operation had laid the filesystem out for 2*3TB+1*1TB disks, which wasn’t the right configuration once the 4TB disk could be fully used. To make it even more annoying, the “btrfs filesystem resize” command takes a “devid”, not a device name.
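Assuming the new disk ended up as devid 2 (the devid and mount point here are placeholders; “btrfs filesystem show” lists the real ones), the missing step would look something like this:

```shell
# List the devices and their devids in the filesystem.
btrfs filesystem show /mnt/data

# Grow the replacement device to its full size; note the devid:max
# syntax rather than a device name.
btrfs filesystem resize 2:max /mnt/data

# Now a balance can actually spread data over the new space.
btrfs balance start /mnt/data
```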
I think that when BTRFS is more stable it would be good to have the btrfs utility warn the user about such potential mistakes. When a replacement device is larger than the old one it will be very common to want to use that space. The btrfs utility could easily suggest the most likely “btrfs filesystem resize” to make things easier for the user.
In a disturbing coincidence, a few days after replacing the first 3TB disk the other 3TB disk started giving read errors. So I replaced the second 3TB disk with a 4TB disk and removed the 1TB disk to give a 4TB RAID-1 array. This is where the metadata duplication feature and copies= option of ZFS would be handy.

Ctree Corruption
2 weeks ago a basic workstation with a 120G SSD, owned by a relative, stopped booting; the most significant errors it gave were “BTRFS: log replay required on RO media” and “BTRFS: open_ctree failed”. The solution to this is to run the command “btrfs-zero-log”, but that initially didn’t work. I restored the system from a backup (which was 2 months old) and took the SSD home to work on it. A day later “btrfs-zero-log” worked correctly and I recovered all the data. Note that I didn’t even try mounting the filesystem in question read-write; I mounted it read-only to copy all the data off. While in theory the filesystem should have been OK, I didn’t have a need to keep using it at that time (having already wiped the original device and restored from backup) and I don’t have confidence in BTRFS working correctly in that situation.
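The recovery steps, roughly as described (the partition name and target path are placeholders; as the post suggests, copy the data off read-only rather than trusting the repaired filesystem read-write):

```shell
# Discard the unreplayable log tree (loses the last few seconds of writes).
btrfs-zero-log /dev/sda1

# Mount read-only and copy everything off instead of reusing the filesystem.
mount -o ro /dev/sda1 /mnt/recovery
cp -a /mnt/recovery/. /srv/restored/
```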
While it was nice to get all the data back, it’s a concern when commands don’t operate consistently.

Debian and BTRFS
I was concerned when the Debian kernel team chose 3.16 as the kernel for Jessie (the next Debian release). Judging by the way development has been going I wasn’t confident that 3.16 would turn out to be stable enough for BTRFS. But 3.16 is working reasonably well on a number of systems so it seems that it’s likely to work well in practice.
But I’m still deploying more ZFS servers.The Value of Anecdotal Evidence
When evaluating software based on reports from reliable sources (i.e. most readers will trust me to run systems well and only report genuine bugs), bad reports have a much higher weight than good reports. The fact that I’ve seen kernel 3.16 work reasonably well on ~6 systems is nice, but that doesn’t mean it will work well on thousands of other systems – although it does indicate that it will work well on more systems than some earlier Linux kernels which had common BTRFS failures.
But the annoyances I had with the 3TB array are repeatable and will annoy many other people. The ctree corruption problem MIGHT have been initially caused by a memory error (it’s a desktop machine without ECC RAM), but the recovery process was problematic and other users might expect problems in such situations.
I've been reading planet debian for many years, & I still enjoy it. I like the mixture of personal thoughts/stories & insightful technical tips. – today's recommendation: Don't ask your questions in private. very much agreed.
this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.
My first plan was to participate in the Bug Squashing Party (BSP) in Paris, but I didn’t manage to organize the travel and a proper stay in the capital of France, so I finally settled on Munich; below you will find a summary of my stay there. While I was organizing my trip, the Munich natives offered me a couch to surf on during the stay, which was very kind. In the end, however, I stayed at my friend’s place - thanks a lot Mehdi!
The BSP gathered Debian people interested in preparing the Jessie release, but also KDE, Kolab and LibreOffice people. The host for the meeting was LiMux (one of the best-known large-scale Linux deployments), which provided not only a venue but also food (free!) and other attractions. Organizers, if you read this, thank you for the amazing work that you have done.
I arrived late on Friday evening, so I didn’t participate in bug squashing that day and got a good night’s sleep instead. On Saturday and Sunday we were, well, squashing bugs. I hadn’t done NMUs before and it was a new experience for me, even though I knew theoretically how it works. The DDs present at the BSP were more than happy to help me when I was stuck. Here is a short description of what I have done:
#770085 - This is a bug I reported myself before BSP. I picked it as a low-hanging fruit and I had it fixed after 2 hours. It fixes a Python module that couldn’t be imported in Python 2.
#768690 - This was a tricky one. latex-mk depends on tgif, which is not going into jessie. The “lazy” workaround I made in the patch is to disable the tgif-related functionality completely. It is not essential to the package, so it should be ok (a shameless plug here: I use and recommend latex-make).
#768615 - A simple one - the package’s test suite is tightly wired to the pygments version (the package is actually a Ruby wrapper around pygments), so it had to be fixed to reflect the version in jessie.
#768695 - I’ve spent nearly a whole day understanding and writing a patch. Long story short - statsmodels uses numpy, pandas and scipy, and its testsuite depends a lot of interfaces of these libraries. For example, it relies on pandas to have DateRange class which is not true for the version in jessie. We uploaded the patch which fixed most of the problems, but the build still failed on i386 due to some precision in floating point operations. The new patch was prepared and awaits upload.
#768673 - I’ve traced the problem to the recent POODLE attack (Wikipedia). Ruby-httpclient uses SSLv3 by default which is deprecated and disabled on the server. The SSL negotiation ends with a nasty ECONNRESET. This problem has been fixed in the newer version of the library so I just backport in my NMU. Note: I suspect that #770616 is related.
#768905 - The keyutils test suite breaks on newer versions of the kernel (i.e., newer than the 3.16 in jessie). The bug remains, but by testing we confirmed that it won’t affect jessie.
I also got my GPG key signed and signed others’ keys in return, so I’m much better connected to the web of trust now (7 signatures from DDs so far).
All in all, it was a great experience, which I recommend to everyone. The atmosphere is very motivating and there are many people who will gladly help you if you don’t know or understand something. You don’t even have to be a DD to participate, though some minimal packaging experience is definitely very useful. I’m quite sure that people would be glad to help even a complete newcomer (I certainly would), but being able to build a package using apt-get source and debuild won’t hurt anybody. At the very least you can always triage bugs: reproduce them, find their cause, propose a way to fix them, etc. As always, you should be open-minded and willing to get your hands dirty.
I also want to thank Debian for sponsoring me, which finally convinced me to participate (Munich is around 500 km from where I live and the travel is fairly expensive).
Some links of interest:
Many bugs still wait to be squashed (120 at the time of writing this), so let’s get back to work!
(If I've linked you to this page, it is my feeble attempt to provide a more convincing justification.)
I often receive instant messages or emails requesting help or guidance at work or on one of my various programming projects.
When asked why they asked privately, the responses vary; mostly along the lines of it simply being an accident, not knowing exactly where to ask, as well as not wishing to "disturb" others with their bespoke question. Some will be more candid and simply admit that they were afraid of looking unknowledgable in front of others.
It is always tempting to simply reply with the answer, especially as helping another human is inherently rewarding (unless one is a psychopath). However, one can actually do more good overall by insisting that the question is re-asked in a more public forum.
This is for many reasons. Most obviously, public questions are simply far more efficient as soon as more than one person asks them — the response can be found with a search engine or linked to in the future. These time savings soon add up, meaning that more can simply get done in any given day. After all, most questions are not as unique as people think.
Secondly, a private communication cannot be corrected or elaborated on if someone else notices it is incorrect or incomplete. Even this rather banal point is more subtle than it first appears — the lack of possible corrections deprives both the person asking and the person responding of the true and correct answer.
Lastly, conversations that happen in private deprive others of the answer as well. Perhaps someone was curious but hadn't got around to asking? Maybe the answer—or even the question!—contains a clue to solving some other issue. None of this can happen if it all takes place behind closed doors.
There are lots of subtler reasons too — in a large organisation or team, simply knowing what other people are curious about can be curiously valuable information.
Note that this is not—as you might immediately suspect—simply a way of ensuring that one gets the public recognition or "kudos" from being seen helping others.
I wouldn't deny that technical communities work on a gift-economy basis to some degree, but to dismiss all acts of assistance as "selfish" and value-extracting would be to take the argument too far in the other direction. That said, the lure and appeal of public recognition should not be underestimated, and can certainly provide an incentive to elaborate and provide a generally superior response.
More philosophically, there's also something fundamentally "honest" about airing issues in an appropriately public and transparent manner. I feel it promotes a culture of egoless conversations, of being able to admit one's mistakes and ultimately a healthy personal mindset.
So please, take care not only in the way you phrase and frame your question, but also consider the wider context in which you are asking it. And don't take it too personally if I ask you to re-ask elsewhere...
It uses cURL to navigate through the various jumps required by the protocol, perform the necessary posts, etc.
I haven’t read the Shibboleth specs, so it may not be the best way, and may not work in all cases, but that was enough for my case, at least. Feel free to improve it on Github Gists.
After the connection is successful, one may reuse the .cookieJar file to perform further cURL connections, or even some automated content mirroring with httrack, for instance (see a previous experiment of mine with httrack for Moodle).
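cURL's cookie-jar file uses the old Netscape cookie format, which Python's standard http.cookiejar module can read and write as well, so the saved .cookieJar can in principle be reused from scripts too. A minimal sketch of that save/reload round trip (the cookie name and domain are made up for illustration):

```python
import http.cookiejar
import os
import tempfile
import time

# curl's --cookie-jar output is Netscape-format, the same format that
# MozillaCookieJar reads and writes, so a jar written by one tool can
# (in principle) be reloaded by the other.
jar_path = os.path.join(tempfile.mkdtemp(), "cookieJar")

jar = http.cookiejar.MozillaCookieJar(jar_path)
jar.set_cookie(http.cookiejar.Cookie(
    version=0, name="_shibsession_demo", value="abc123",  # hypothetical
    port=None, port_specified=False,
    domain="sso.example.org", domain_specified=True,
    domain_initial_dot=False,
    path="/", path_specified=True,
    secure=True, expires=int(time.time()) + 3600,
    discard=False, comment=None, comment_url=None, rest={},
))
jar.save(ignore_discard=True)

# A later session (or another tool) reloads the saved cookies.
jar2 = http.cookiejar.MozillaCookieJar(jar_path)
jar2.load(ignore_discard=True)
names = [c.name for c in jar2]
print(names)
```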
One of the attention-grabbing measures in the Autumn Statement by Chancellor George Osborne was the "Google tax" on profits going offshore, which may prove unworkable (The Independent). This is interesting because a common mechanism for moving profits around is so-called transfer pricing, where the business in one country pays an inflated price to its sibling in another country for some supplies. It sounds like the intended way to deal with that is by inspecting company accounts and assessing the underlying profits.
So what’s this got to do with Free Software? Well, one thing the company might buy from itself is a licence to use some branding, paying a fee for each use. The main reason this is possible is that copyright is usually a monopoly, so there is no supplier of a replacement product, which makes it hard to assess how much the price has been inflated.
One possible method of assessing the overpayment would be to compare with how much other businesses pay for their branding licences. It would be interesting if Revenue and Customs decide that there’s lots of Royalty Free licensing out there – including Free Software – and so all licence fees paid to related companies are a tax avoidance ruse. Similarly, any premium for a particular self-branded product over a generic equivalent could be classed as profit transfer.
This could have amusing implications for proprietary software producers who sell to sister companies but I doubt that the government will be that radical, so we’ll continue to see absurdities like Starbucks buying all their coffee from famous coffee producing countries Switzerland and the Netherlands. Shouldn’t this be stopped, really?
For the last few months, I have been working on a new version of Debian Code Search, and today it’s going live! I call it Debian Code Search Instant, for multiple reasons, see below.

A lot faster
The old Code Search architecture was split across 5 different servers, but the search itself was only performed on a single machine with a network-attached, SSD-backed block volume. The new Code Search spreads both the trigram index and the source code across 6 different servers, and thanks to the new hardware generation, we have a locally connected SSD drive in each of them. So, we now get more than 6 times the IOPS we had before, at much lower latency :).

Grouping by Debian source package
The first feature request we ever got for Code Search was to group results by Debian source package. I’ve tried tackling that before, but it’s a pretty hard problem. With the new architecture and capacity, this finally becomes feasible.
After your query completes, there is a checkbox called “Group search results by Debian source package”. Enable it, and you’ll get neatly grouped results. For each package, there is a link at the bottom to refine the search to only consider that package.
In case you are more interested in the full list of packages, for example because you are doing large-scale changes across Debian, that’s also possible: click on the ▾ symbol right next to the “Filter by package” list, and a curl command will be revealed with which you can download the full list of packages.

Expensive queries (with lots of results) now possible
Previously, we had a 60 second timeout within which queries had to complete. This timeout has been completely lifted, and long-running queries with tons and tons of results are now possible. We kindly ask you not to abuse this feature — it’s not very exciting to play with complexity explosions in regular expression engines while preventing others from doing their work. If you’re interested in that sort of analysis, go grab the source code and play with it on your own machine :).

Instant results
While your query is running, you will almost immediately see the top 10 results, even though the search may still take a while. This is useful to figure out if your regular expression matches what you thought it would, before having to wait until your 5 minute query is finished.
Also, the new progress bar tells you how far the system is with your search.

Instant indexing
Previously, Code Search was deployed to a new set of machines from scratch every week in order to update the index. This was necessary because the performance would severely degrade while building the index, so we were temporarily running with twice the amount of machines until the new version was up.
In the new architecture, we store an index for each source package and then merge these into one big index shard. This currently takes about 4 minutes with the code I wrote, but I’m sure this can be made even faster if necessary. So, whenever new packages are uploaded to the Debian archive, we can just index the new version and trigger a merge. We get notifications about new package uploads from FedMsg. Packages that are not seen on FedMsg for some reason are backfilled every hour.
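The per-package indexing and merging described above can be sketched in miniature. This is a toy illustration of the trigram-index idea, not the actual Code Search implementation:

```python
# Toy sketch: build a small trigram index per source package, then
# merge the per-package indexes into one shard mapping each trigram
# to the set of packages containing it.

def trigrams(text):
    """All overlapping 3-character substrings of text."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

def index_package(name, source):
    """Per-package index: trigram -> {package}."""
    return {t: {name} for t in trigrams(source)}

def merge(indexes):
    """Merge per-package indexes into one shard."""
    shard = {}
    for idx in indexes:
        for t, pkgs in idx.items():
            shard.setdefault(t, set()).update(pkgs)
    return shard

shard = merge([
    index_package("hello", 'printf("hello")'),
    index_package("coreutils", "print usage"),
])

# Candidate packages for the query "print": every package containing
# all of the query's trigrams. (Candidates still need verification
# against the real regular expression; trigram hits can be false
# positives.)
query = "print"
candidates = set.intersection(*(shard.get(t, set()) for t in trigrams(query)))
print(sorted(candidates))
```

When a new package version is uploaded, only its own small index needs rebuilding before a re-merge, which is what keeps the update path fast.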
The time between uploading a package and being able to find it in Debian Code Search therefore now ranges from a couple of minutes to about an hour, instead of about a week!

New, beautiful User Interface
Since we needed to rewrite the User Interface anyway thanks to the new architecture, we also spent some effort on making it modern, polished and beautiful.
Smooth CSS animations make you aware of what’s going on, and the search results look cleaner than ever.

Conclusion
At least to me, the new Debian Code Search seems like a significant improvement over the old one. I hope you enjoy the new features into which I put a lot of work. In case you have any feedback, I’m happy to hear it (see the “contact” link at the bottom of every Code Search site).
as russ wrote some weeks ago in an excellent mail to debian-vote, upstreams are the raison d'être for linux distributions. my experience in collaborating with upstream authors is mostly very positive, the most recent example from today being #728345. – thanks to all upstream authors for their passion in writing software & sharing it. & for caring about it later as well!
this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.
In November I resumed work on Debian LTS and worked on the following packages:
- DLA 88-1 for ruby1.8 fixing several CVEs as described in the announcement.
- DLA 91-1 for tomcat6, mostly prepared by one of its maintainers, Tony Mancill, also fixing several issues by upgrading to a new upstream version. I just did the review, testing, release and announcement, and then figured out the proper versioning for Wheezy, which turned out to be problematic because using the recommended versioning breaks the upgrade paths when new upstream versions are introduced to squeeze-lts which basically have the same version as will be (or are) in wheezy-security. One cause (besides the wrong recommendation, which still needs fixing in our wiki page) are non-enforceable version constraints: the suites wheezy and squeeze-lts reside on ftp-master.debian.org (and wheezy is only updated on point releases), while wheezy-security resides on security.debian.org. Most probably we will still leave things as they are for squeeze-lts and make changes for wheezy-lts only. Oh, and the release of the tomcat6 update for wheezy is currently stalled by #770769.
- DLA 92-1 for tomcat-native was also done in cooperation with Tony and is also a new upstream release, which is needed as the old version of tomcat-native doesn't function with the new tomcat6 version.
- The 188.8.131.52 update of linux-2.6 has not happened yet, but is planned for the coming weekend. So far it has been prepared in collaboration with Moritz Mühlenhoff from the security team, Ben Hutchings from the kernel team, and Raphaël Hertzog and myself from the LTS team, which I consider to be quite nice. As Raphaël had already explained, Ben has joined the LTS team, and so far his contribution was to identify a problem in a patch related to openvz, which is why I haven't published this kernel update yet. Also, there was zero feedback from testers for the openvz flavor packages - so if you are using openvz and squeeze kernels, please contact us. For all the other flavors there was positive feedback on the call for testing (thanks!) - so you might want to give these kernels a try too!
Thanks to everyone who is supporting Squeeze LTS in whatever form, according to the wide feedback there are quite many people appreciating the work!
(As this was asked on IRC: if you are a maintainer preparing something for squeeze-lts, that's totally great (thanks!), just please tell us about it, as this is the assumption we are working under: if no one tells the LTS team, we think we need to prepare the upgrades for squeeze-lts.)
Following the lead of my dear friend Daniel and his fantastic and addictive “Summing up” series, here’s a link pack of recent stuff I read around the web.
Link pack is definitely a terrible name, but I’m working on it.
How to Silence Negative Thinking
On how to avoid the pitfall of being a Negatron and not an Optimist Prime. You might be your own worst enemy and you might not even know it:
Psychologists use the term “automatic negative thoughts” to describe the ideas that pop into our heads uninvited, like burglars, and leave behind a mess of uncomfortable emotions. In the 1960s, one of the founders of cognitive therapy, Aaron Beck, concluded that ANTs sabotage our best self, and lead to a vicious circle of misery: creating a general mindset that is variously unhappy or anxious or angry (take your pick) and which is (therefore) all the more likely to generate new ANTs. We get stuck in the same old neural pathways, having the same negative thoughts again and again.
Meet Harlem’s ‘Official’ Street Photographer
A man goes around Harlem with his camera, looking to give instead of taking. Makes you think about your approach to people and photography; things can be simpler. Kinda like Humans of New York, but in Harlem. And grittier, and on film — but as touching, or more:
“I tell people that my camera is a healing mechanism,” Allah says. “Let me photograph it and take it away from you.”
What Happens When We Let Industry and Government Collect All the Data They Want
Why “having nothing to hide” is not about the now, but about the later. It’s not that someone is going to judge you for pushing every detail of your life to Twitter and Instagram, it’s just that something you do might be illegal a few years later:
There was a time when it was essentially illegal to be gay. There was a time when it was legal to own people—and illegal for them to run away. Sometimes, society gets it wrong. And it’s not just nameless bureaucrats; it’s men like Thomas Jefferson. When that happens, strong privacy protections—including collection controls that let people pick who gets their data, and when—allow the persecuted and unpopular to survive.
The Sex-Abuse Scandal Plaguing USA Swimming
Abusive coaches and a bullying culture in sports training are the perfect storm for damaging children. And it’s amazing the extent to which a corporation or institution is willing to look the other way, as long as it saves face. Very long piece, but an intriguing read.
What Cities Would Look Like if Lit Only by the Stars
Thierry Cohen goes around the world building beautiful and realistic composite images of how big cities would look if lit only by stars. The original page has some more cities: Villes éteintes (Darkened Cities).
On Muppets & Merchandise: How Jim Henson Turned His Art into a Business
Lessons from how Jim Henson managed to juggle both art and business without selling out for the wrong reasons. Really interesting, and it puts Henson in perspective as a very smart man who managed to convince everyone to give him money for playing with Muppets. The linked video on How the Muppet Show is Made is also cool. Made me curious enough to get the book.
Barbie, Remixed: I (really!) can be a computer engineer
Mattel published a hugely misguided book in which Barbie, supposedly empowered to be a computer engineer, turns out to be anything but. The internet did not disappoint and fixed the problem within hours. There’s now even an app for that (including user-submitted pages).