Planet Debian

Planet Debian - http://planet.debian.org/

Ingo Juergensmann: Letsencrypt: challenging challenges

7 January, 2016 - 00:56

On December 3rd 2015 the Letsencrypt project went into public beta, and this is a good thing! Having more and more websites running their HTTPS on good and valid SSL certificates is a good thing, especially because Letsencrypt takes care of renewing the certs every now and then. But there are still some issues with Letsencrypt. Some people criticize that the Python client needs root privileges and such. Others complain that Letsencrypt currently only supports webservers.

Well, I think this is what we could have expected from a public beta from the start: the Letsencrypt project focussed on a reference implementation, and other implementations are already available. But one thing seems to be a problem in the design of how Letsencrypt works: it uses a challenge-response method to verify that the requesting user controls the domain for which the certificate shall be issued. This might work well in simple deployments, but what about slightly more complex setups, like multiple virtual machines and different protocols being involved?

For example: you're using domain A for your communication, like user@example.net for your mail, XMPP and SIP. Your mailserver runs on one virtual machine, whereas the webserver runs on a different virtual machine. The same goes for XMPP and SIP: a separate VM each.

Usually the Letsencrypt approach would be that you configure your webserver (by configuring a /.well-known/acme-challenge/ location or by using the standalone server on port 443) to handle the challenge-response requests. This would give you an SSL cert for example.net on your webserver. Of course you could copy this cert to your mail, XMPP and SIP servers, but then again you have to do this every time the SSL cert gets renewed.
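
For the single-webserver case, the reference client can answer the challenge for you. A minimal sketch, assuming the 2015-era Python client; the paths and the domain are placeholders:

# webroot mode: the client drops the challenge files into the running webserver's docroot
$ letsencrypt certonly --webroot -w /var/www/example.net -d example.net
# standalone mode: the client itself listens and answers the validation request
$ letsencrypt certonly --standalone -d example.net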

Another challenge is of course that you don't have just one or two domains, but a bunch of domains. In my case I host more than 60 domains. The mail for all domains is handled by my mailserver running on its own virtual machine. The webserver is located on a different VM. For some domains I offer XMPP accounts on a third VM.

What is the best way to solve this problem? Moving everything to just one virtual machine? Naaah! Writing some scripts to copy the certs as needed? Not very smart either. Using a network share for the certs between all VMs? Hmmm... would that work?

And what about the TLSA entries of your DNSSEC setup? When an SSL cert is renewed, the fingerprint might need an update in your DNS zone - for several protocols like mail, XMPP, SIP and HTTPS. At least the Bash implementation of Letsencrypt offers a "hook" which is called after the SSL cert has been issued.
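
Such a hook would also be the natural place to update TLSA records. A sketch of the standard openssl pipeline for computing a "3 1 1" digest; the certificate path is a placeholder for wherever your client stores the renewed cert:

# sha256 digest of the new cert's public key, as used in a TLSA 3 1 1 record
$ openssl x509 -in /etc/letsencrypt/live/example.net/cert.pem -noout -pubkey \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256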

What are your ways of dealing with the ACME protocol challenges in this kind of multi-domain, multi-VM setup?

Kategorie: Debian | Tags: Debian, LetsEncrypt, Linux, Software

Daniel Pocock: Want to use free software to communicate with your family in Christmas 2016?

6 January, 2016 - 19:25

Was there a friend or family member with whom you could only communicate using a proprietary, privacy-eroding solution like Skype or Facebook this Christmas?

Would you like to be only using completely free and open solutions to communicate with those people next Christmas?

Developers

Even if you are not developing communications software, could the software you maintain make it easier for people to use "sip:" and "xmpp:" links to launch other applications? Would this approach make your own software more convenient at the same time? If your software already processes email addresses or telephone numbers in any way, you could do this.

If you are a web developer, could you make WebRTC part of your product? If you already have some kind of messaging or chat facility in your website, WebRTC is the next logical step.

If you are involved with the Debian or Fedora projects, please give rtc.debian.org and FedRTC.org a go and share your feedback.

If you are involved with other free software communities, please come to the Free-RTC mailing list and ask how you can run something similar.

Everybody can help

Do you know any students who could work on RTC under Google Summer of Code, Outreachy or any other student projects? We are particularly keen on students with previous experience of Git and at least one of Java, C++ or Python. If you have contacts in any universities who can refer talented students, that can also help a lot. Please encourage them to contact me directly.

In your workplace or any other organization where you participate, ask your system administrator or developers if they are planning to support SIP, XMPP and WebRTC. Refer them to the RTC Quick Start Guide. If your company web site is built with the Drupal CMS, refer them to the DruCall module; it can be installed by most webmasters without any coding.

If you are using Debian or Ubuntu on your personal computer or in your office and trying to get the best results with the RTC and VoIP packages on those platforms, please feel free to join the new debian-rtc mailing list to discuss your experiences and get advice on which packages to use.

Everybody is welcome to ask questions and share their experiences on the Free-RTC mailing list.

Please also come and talk to us at FOSDEM 2016, where RTC is in the main track again. FOSDEM is on 30-31 January 2016 in Brussels; attendance is free and no registration is necessary.

This mission can be achieved with lots of people making small contributions along the way.

Norbert Preining: TeX Live security improvements

6 January, 2016 - 11:43

Today I committed a set of changes to the TeX Live subversion repository that should pave the way for better security handling in the future. Work is underway to use strong cryptographic signatures to verify that packages downloaded and installed into a TeX Live installation have not been tinkered with.

While there is still a long way to go and much to figure out, the current changes already improve the situation considerably.

Status up to now

Although we did ship size and checksum information within the TeX Live database, this information was only considered by the installer when re-starting an installation, to make sure that the already downloaded packages are the ones we should use.

Neither the installer nor tlmgr used the checksum to verify that a downloaded package is correct, relying mostly on the fact that the packages are xz-compressed and would decompress to rubbish when there is a transfer error.

Although none of us believes that there is a serious interest in tinkering with the TeX Live distribution – maybe to steal just another boring scientific paper? – the door was still open.

Changes implemented

The changes committed to the repository today, which will be in a testing phase to get rid of bugs, consist of the following:

  • unification of installation routines: it didn’t make sense to duplicate the actual download and unpack code in the installer and in tlmgr, so the code base was simplified and unified, and both the installer and tlmgr now use the same code paths to obtain, unpack, and install a package.
  • verification of size and checksum data: hand in hand with the above change, verification of downloaded packages based on both the size and the checksum is now carried out.

Together these two changes allow install-tl/tlmgr to verify that a package (.tar.xz) matches the information in the accompanying texlive.tlpdb. This still leaves the option of tampering with a package and updating the texlive.tlpdb with adjusted checksums/sizes.
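
Conceptually, the new verification amounts to no more than the following sketch; the file name is a placeholder, the reference values come from texlive.tlpdb, and the real implementation lives in the shared Perl code:

# size in bytes, to compare against the size recorded in texlive.tlpdb
$ stat -c %s collection-basic.tar.xz
# checksum, to compare against the recorded value (currently md5, see below)
$ md5sum collection-basic.tar.xz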

Upcoming changes

For the future we plan mainly two things:

  • switch to a stronger hashing algorithm: until now we have used md5, but we will switch to sha512 instead.
  • GnuPG signing of the checksum file of the texlive.tlpdb, that is, a detached signature of texlive.tlpdb.checksum

The last step above will give a very high level of security, as it will not be practically possible to alter the information in the texlive.tlpdb; thus no tampering with the checksum information of the containers, and in turn no tampering with the actual containers, will be possible.

Restrictions

Due to the wide range of supported architectures and operating systems, we will not make verification obligatory, but if a gpg binary is found, it will be used to verify the downloaded texlive.tlpdb.checksum.
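
Checking a detached signature with gpg is a one-liner. A sketch, assuming the signature ships as texlive.tlpdb.checksum.asc next to the checksum file (the exact file name is not final):

# verify the checksum file against its detached signature
$ gpg --verify texlive.tlpdb.checksum.asc texlive.tlpdb.checksum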

Details have to be hammered out and the actual programming has to be done, but we are on the way.

Enjoy.

Gunnar Wolf: Starting work / call for help: Debianizing Drupal 8

6 January, 2016 - 09:39

I have helped maintain Drupal 7.x in Debian for several years (and am now the leading maintainer). During this last year, I got a small article published in the Drupal Watchdog, where I stated:

What About Drupal 8?

Now... With all the hype set in the future, it might seem striking that throughout this article I only talked about Drupal 7. This is because Debian seeks to ship only production-ready software: as of this writing, Drupal 8 is still in beta. Not too long ago, we still saw internal reorganizations of its directory structure.

Once Drupal 8 is released, of course, we will take care to package it. And although Drupal 8 will not be a part of Debian 8, it will be offered through the Backports archive, following all of Debian's security and stability requirements.

Time passes, and I have to follow through on what I promised. Drupal 8 was released on November 18, so I must get my act together and put it in shape for Debian!

So, prompted by a mail from a fellow Debian Developer, I pushed today to Alioth the (very little) work I have done so far to this effect; every DD can now step in and help (and I guess DMs and other non-formally-affiliated contributors can too, but I frankly haven't really read up on how you can be a part of collab-maint).

So, what has been done? What needs to be done?

Although the code bases are massively different, I took the (un?)wise step of basing it off the Drupal 7 packaging, and started solving Lintian problems and installation woes. So far, I have an install that looks sane (but has not yet been tested), but has lots of Lintian warnings and errors. The errors are mostly about missing sources, as Drupal 8 inlines many unrelated projects (fortunately documented and frozen to known-good versions); there are two possible courses of action:

  1. Preferred way: Find which existing Debian package provides each of them, remove it from the binary package, and declare a dependency.
  2. Pragmatic way: As the compatibility must sometimes be to a specific version level, just provide the needed sources in debian/missing-sources.

We can, of course, first go pragmatic and later start reviewing what can be safely depended upon. But for this, we need people with better PHP experience than me (which is not much to talk about). This cleanup will go hand in hand with cleaning up the extra-license-file Lintian warnings, as there is one for each such project. Of course, documenting each such license in debian/copyright is also a must.
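
As a starting point for that review, one could list exactly what upstream bundles. A sketch, assuming the Drupal 8 tarball ships the composer.lock its releases are built from:

# list each bundled project and the version it is frozen to
$ jq -r '.packages[] | "\(.name) \(.version)"' composer.lock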

Anyway, there is quite a bit of work to do. And later, we have to check that things work reliably. And also, quite probably, adapt any bits of dh-make-drupal to work with v8 as well as v7 (and I am not sure if I already deprecated v6, quite probably I have).

So... If you are interested in working on this, please contact me directly. If we get more than a handful, building a proper packaging team might be in place, otherwise we can just go on working as part of collab-maint.

I am very short on time, so any extra hands will be most helpful!

Lior Kaplan: PHP 5 Support Timeline

6 January, 2016 - 06:52

With the new year starting, the PHP project is being asked to decide on the PHP 5 support timeline.

While aligning the PHP 5.6 support timeline with the release date of PHP 7.0 seems like common sense to keep the support schedule continuous, there's a big question whether to extend it further with an additional year of security support till the end of 2018. This would give PHP 5.6, the last of the PHP 5 branch, 2 years of security support and de facto the same life span as PHP 7.0 (ending support for both in Dec 2018).

But besides the support issues, this also affects the adoption rate of PHP 7.0: the end of support for the PHP 5 branch serves as a catalyst for upgrades. My concern is that with the additional security support the project would need to deal with too many branches (5.6, 7.0, 7.1, 7.2 and then the upcoming 7.3 branch).

I think we should limit what we guarantee (meaning keeping only one year of security support, till the end of 2017), and encourage project members and the ecosystem (e.g. Linux distributions) to maintain further security support on a best-effort basis.

This is already the case for releases out of official support, like the 5.3 and 5.4 branches (examples of backports done by Debian: 5.3 and 5.4). And of course, we also have companies that make their money out of long term support (e.g. Red Hat).

On the other hand, we should help the ecosystem in doing such extended support, and host backported fixes in the project’s git repo instead of having each Linux distro do the same patch work on its own.


Filed under: PHP

Daniel Pocock: Promoting free software and free communications on popular social media networks

5 January, 2016 - 21:27

Sites like Twitter and Facebook are not fundamentally free platforms, despite the fact they don't ask their users for money. Look at how Facebook's censors confused Denmark's mermaid statue with pornography or how quickly Twitter can make somebody's account disappear, frustrating public scrutiny of their tweets and potentially denying access to vital information in their "direct message" mailbox. Then there is the fact that users don't get access to the source code, users don't have a full copy of their own data and, potentially worst of all, if most people bothered to read the fine print of the privacy policy they would find it is actually a recipe for downright creepiness.

Nonetheless, a significant number of people have accounts in these systems and are to some extent contactable there.

Many marketing campaigns that have been successful today, whether for crowdfunding, political activism or just finding a lost cat, claim to have had great success because of Twitter or Facebook. Is this true? In reality, many users of those platforms follow hundreds of different friends, and if they only check in once a day, filtering algorithms show them only a small subset of what all their friends posted. Against these odds, just posting your great idea on Facebook doesn't mean that more than five people are actually going to see it. Those campaigns that have been successful have usually had something else going in their favour: perhaps it was a friend working in the media who gave their campaign a plug on his radio show, or maybe they were lucky enough to be slashdotted. Maybe it was having the funds for a professional video production, with models, that passes itself off as something spontaneous. The use of Facebook or Twitter alone did not make such campaigns successful; it was just part of a bigger strategy where everything fell into place.

Should free software projects, especially those revolving around free communications technology, use such platforms to promote themselves?

It is not a simple question. In favour, you could argue that everything we promote through public mailing lists and websites is catalogued by Google anyway, so why not make it easier to access for those who are on Facebook or Twitter? On top of that, many developers don't even want to run their own mail server or web server any more, let alone a self-hosted social-media platform like pump.io. Even running a basic SIP proxy server for the large Debian and Fedora communities involved a lot of discussion about the approach to support it.

The argument against using Facebook and Twitter is that you are shooting yourself in the foot: when you participate in those networks, you give them even more credibility and power (which you could quantify using Metcalfe's law). The Metcalfe value of their network, being quadratic rather than linear, shoots ahead of the Metcalfe value of your own solution, putting your alternative even further out of reach. On top of that, the operators of the closed platform are able to evaluate who is responding to your message and how they feel about it, and use that intelligence to further undermine you.

How do you feel about this choice? How and when should free software projects and their developers engage with mainstream social media technology? Please come and share your ideas on the Free-RTC mailing list or perhaps share and Tweet them.

Benjamin Mako Hill: Celebrate Aaron Swartz in Seattle (or Atlanta, Chicago, Dallas, NYC, SF)

5 January, 2016 - 08:07

I’m organizing an event at the University of Washington in Seattle that involves a reading, the screening of a documentary film, and a Q&A about Aaron Swartz. The event coincides with the third anniversary of Aaron’s death and the release of a new book of Swartz’s writing that I contributed to.

The event is free and open to the public and details are below:

WHEN: Wednesday, January 13 at 6:30-9:30 p.m.

WHERE: Communications Building (CMU) 120, University of Washington

We invite you to celebrate the life and activism efforts of Aaron Swartz, hosted by UW Communication professor Benjamin Mako Hill. The event is next week and will consist of a short book reading, a screening of a documentary about Aaron’s life, and a Q&A with Mako, who knew Aaron well – details are below. No RSVP required; we hope you can join us.

Aaron Swartz was a programming prodigy, entrepreneur, and information activist who contributed to the core Internet protocol RSS and co-founded Reddit, among other groundbreaking work. However, it was his efforts in social justice and political organizing combined with his aggressive approach to promoting increased access to information that entangled him in a two-year legal nightmare that ended with the taking of his own life at the age of 26.

January 11, 2016 marks the third anniversary of his death. Join us two days later for a reading from a new posthumous collection of Swartz’s writing published by New Press, a showing of “The Internet’s Own Boy” (a documentary about his life), and a Q&A with UW Communication professor Benjamin Mako Hill – a former roommate and friend of Swartz and a contributor to and co-editor of the first section of the new book.

If you’re not in Seattle, there are events with similar programs being organized in Atlanta, Chicago, Dallas, New York, and San Francisco.  All of these other events will be on Monday January 11 and registration is required for all of them. I will be speaking at the event in San Francisco.

Stig Sandbeck Mathisen: Munin 3 packaging

5 January, 2016 - 06:00

The Munin project is moving slowly closer to a Munin 3 release. In parallel, the Debian packaging is changing, too.

The new web interface is looking much better than the traditional web-1.0 interface normally associated with munin.

New package layout

Perl libraries

All the Munin perl libraries are placed in “libmunin-*-perl” and split into separate packages, where the split is decided mostly by dependencies.

If you don’t want to monitor samba, or SNMP, or MySQL, there should be no need to have those libraries installed. That does mean more binary packages, on the other hand.

Munin master

Munin now runs as a standalone HTTPD; it no longer graphs from cron, nor does it run as CGI or FastCGI scripts.

The user “munin” grants read-write access, while the group “munin” grants read only access. The new web interface runs as the “munin-httpd” user, which is member of the “munin” group.

There is a “munin” service. For now, it runs rrdcached for the munin user and RRD directory.

Munin node

The perl “munin-node” and the compiled “munin-node-c” should be interchangeable, and be able to run the same plugins.

Munin node, and Munin async node, should be wholly separate from the Munin master. It should be possible to use the perl “munin-node” package, and the compiled “munin-node-c” package, without installing the master at all.

Munin plugins

The Munin plugins are placed in separate packages named “munin-plugins-*”. The split is based on monitoring subject, or on dependencies. They depend on the appropriate “libmunin-plugin-*-perl” packages.

The “munin-plugins-c” package, which is built from the “munin-node-c” source, contains a number of compiled plugins which should use fewer resources than their shell, perl or python equivalents.

Plugins from sources other than “munin” must work similarly to the ones from “munin”. More work on this is needed.

Testing

In late December 2015, I set up Jenkins with jenkins-debian-glue to build packages, test with autopkgtest and update my development apt repository on each commit. That helped with developing and testing the new Munin packages.

The packages are not quite ready to upload to experimental, but they are continuously deployed to weed out bugs. They can be found in my packaging apt repo. (The usual non-guarantees apply, handle with care, keep away from small children, etc…)

Comments

Munin developers, packagers and users hang out on “#munin” on the OFTC network. Please drop by if you have questions or comments.

Clint Adams: Real world Cryptol

5 January, 2016 - 03:07

cryptol is now in testing.

If you hurry, you can contribute to upstream implementations of CAST5 and Twofish before Christmas this Thursday.

Michal Čihař: Enca 1.17

4 January, 2016 - 21:45

The last version of Enca was released more than a year ago, and now it's time for a new release. Various compatibility fixes have been committed to the Git repository in the meantime.

If you don't know Enca, it is an Extremely Naive Charset Analyser. It detects character set and encoding of text files and can also convert them to other encodings using either a built-in converter or external libraries and tools like libiconv, librecode, or cstocs.
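
For example, detection and conversion from the command line look like this (a sketch; the language hint and file name are placeholders):

# report the detected charset of the file
$ enca -L czech document.txt
# convert the file to UTF-8 in place using the enconv frontend
$ enconv -L czech -x UTF-8 document.txt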

Full list of changes for 1.17 release:

  • Fixed conversion of GB2312 encoding with iconv
  • Fixed iconv conversion on OSX
  • Documentation improvements
  • Fixed execution of external converters with ACLs
  • Improved test coverage to 80%

Still, Enca is in maintenance mode only and I have no intention of writing new features. However, there is no such limitation on other contributors :-).

You can download from http://cihar.com/software/enca/.

Filed under: Enca, English

Mike Gabriel: My FLOSS activities in December 2015

4 January, 2016 - 19:17

December 2015 was a month mainly dedicated to work for local contractors (local schools mainly) and my employer (University of Kiel, Git server migration).

At the end of the month I had the privilege of attending the 32c3 ([1]) where we had a little sprint for the Arctica Project. Thanks to my family and esp. to my wonderful wife for letting me attend this always fascinating event at the end of each year.

Horde Hacking

One of my local customers is really interested in using a non-gated-community mail provider, so he asked me to host his company's mail addresses on my mail company's server - something I don't normally offer (anymore) except to dear friends and very patient customers.

This customer sponsored several more work hours on hacking on the Kolab_Storage code in Horde and proposing bug fixes upstream [2,3,4,5,6,7,8]. Thanks for supporting my work on the Horde Groupware Framework. Thanks to Horde upstream maintainers (esp. Michael Rubinsky) for reacting on my bug submissions so promptly.

Debian and Debian LTS

Locally, I did a lot of work for our Debian Edu / Skolelinux customers again this month.


Bernd Zeimetz: open-vm-tools updated

4 January, 2016 - 17:06

In January 2014 the open-vm-tools package was orphaned and I took the chance to take over the maintenance. Unfortunately the package is still not 100% in the shape I’d like to see it in, but I’m getting closer. I have to say Thank You for a lot of good bug reports, especially for those use cases which are hard to test/reproduce for me (running Debian in a Windows-based VMware Workstation Player, for example).

At conova communications GmbH, the company I work for, we are using the package on all of our Debian VMs, both for customer and internal use. It is essential for us to have properly working open-vm-tools - not only to be able to shut down a VM from VMware vCenter, but also because tools like vSphere Data Protection and Veeam depend on it. The good thing is that I can work on and test the package at work, so breakages are normally detected early and fast.

Getting in good contact with the VMware upstream was easy, and the developers there are helpful and reply pretty fast to their emails. Also, it seems there are finally “real” commits showing up in the open-vm-tools GitHub repository again, not only huge single commits with a full release. It is not only nice to see that they are moving in the right direction again; this is also really helpful for fixing (urgent) bugs before the next release of open-vm-tools - or for backporting a fix to the version in stable/oldstable.

As of a few days ago, we have open-vm-tools 10.0.5-3227872 in

  • testing & unstable
  • jessie-backports
  • wheezy-backports-sloppy

If you are using VMware ESX 5.5 or newer, you should upgrade to the backports versions. Same if you use a recent VMware player version.

Please note that since 10.0.0 the open-vm-dkms package is only necessary if you need the legacy vmxnet module. This is only the case if you are using very old VM hardware versions. vmxnet3 is shipped in the Debian kernel, so you don’t need to compile extra modules to use it. The vmhgfs module was replaced by a fuse-based implementation.
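
To check that the in-kernel driver is available and in use, a quick sketch with standard tools:

# the vmxnet3 driver shipped with the Debian kernel
$ modinfo vmxnet3
# verify it is loaded in the running VM
$ lsmod | grep vmxnet3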

If you’d like to help maintain the package, please send bugs/patches via the Debian BTS or, even better, send pull requests for pkg-open-vm-tools. The repository is mirrored to git.bzed.at in case you want to avoid GitHub.

Alessio Treglia: Enterprise Innovation in a Transformative Society

4 January, 2016 - 16:13

 

A recent article by professors Karim Lakhani and Marco Iansiti in the Harvard Business Review, “Digital Ubiquity: How Connection, Sensors and Data are Revolutionizing Business”, gave me the opportunity for some interesting insights and considerations.

The evolution of digital technology and the development of modern “Internet of Things” devices are having huge transformative effects on social interrelationships and their business models. These effects cannot be ignored if we want to perceive - with the right clarity and meaning - the innovation process that inevitably comes with them.

The three fundamental properties of digital technology…


Mike Gabriel: MATE 1.12 landed in Debian unstable

4 January, 2016 - 12:49

Yesterday, I did a bundle upload of all MATE 1.12 related packages to Debian unstable. Packages are currently building for the 22 architectures supported by Debian; the build status can be viewed on the DDPO page of the Debian MATE Packaging Team [1].

Again a big thanks to the packaging team. Martin Wimpress amongst others did a fabulous job in bumping all packages towards the 1.12 release series before the Christmas holidays. Over the holidays, I was able to review his work (99% perfect) and upload all binary packages to a staging repository.

@Martin Wimpress: It is really time that we make a DM (Debian Maintainer) out of you!!!

After testing all MATE 1.12 packages on a Debian unstable system, I decided to do a bundle upload yesterday.

Lessons learned about bundling Debian uploads

It absolutely makes sense to hold back package uploads of a project like the MATE desktop until all relevant packages are reviewed, pre-built and tested.

When releasing MATE packages via the team's packaging Git [2], there are normally two actions to be taken on a package release:

  • commit "upload to unstable (debian/<pkg-version>)"
  • tag that commit with "Debian release <pkg-version>"

When reviewing so many Git projects, it is always problematic when people commit something else during the review phase, especially if the review work involves many packages (i.e., Git packaging repos) and requires several days or even weeks to finish.


Benjamin Mako Hill: The Boy Who Could Change the World: The Writings of Aaron Swartz

4 January, 2016 - 09:12

The New Press has published a new collection of Aaron Swartz’s writing called The Boy Who Could Change the World: The Writings of Aaron Swartz. I worked with Seth Schoen to introduce and help edit the opening section of book that includes Aaron’s writings on free culture, access to information and knowledge, and copyright. Seth and I have put our introduction online under an appropriately free license (CC BY-SA).

Over the last week, I’ve read the whole book again. I think the book really is a wonderful snapshot of Aaron’s thought and personality. It’s got bits that make me roll my eyes, bits that make me want to shout in support, and bits that continue to challenge me. It all makes me miss Aaron terribly. I strongly recommend the book.

Because the publication is posthumous, it has meant that folks like me are doing media work for the book. In honor of naming the book their “progressive pick” of the week, Truthout has also published an interview with me about Aaron and the book.

Other folks who introduced and/or edited topical sections in the book are David Auerbach (Computers), David Segal (Politics), Cory Doctorow (Media), James Grimmelmann (Books and Culture), and Astra Taylor (Unschool). The book is introduced by Larry Lessig.

John Goerzen: Hiking a mountain with Ian Murdock

4 January, 2016 - 08:15

“Would you like to hike a mountain?” That question caught me by surprise. It was early in 2000, and I had flown to Tucson for a job interview. Ian Murdock was starting a new company, Progeny, and I was being interviewed for their first hire.

“Well,” I thought, “hiking will be fun.” So we rode a bus or something to the top of the mountain and then hiked down. Our hike was full of — well, everything. Ian talked about Tucson and the mountains, about his time as the Debian project leader, about his college days. I asked about the plants and such we were walking past. We talked about the plans for Progeny, my background, how I might fit in. It was part interview, part hike, part two geeks chatting. Ian had no HR telling him “you can’t go hiking down a mountain with a job candidate,” as I’m sure HR would have. And I am glad of it, because even 16 years later, that is still by far the best time I ever had at a job interview, despite the fact that it ruined the only pair of shoes I had brought along — I had foolishly brought dress shoes for a, well, job interview.

I guess it worked, too, because I was hired. Ian wanted to start up the company in Indianapolis, so over the next little while there was the busy work of moving myself and setting up an office. I remember those early days – Ian and I went computer shopping at a local shop more than once to get the first workstations and servers for the company. Somehow he had found a deal on some office space in a high-rent office building. I still remember the puzzlement on the faces of accountants and lawyers dressed up in suits riding in the elevators with us in our shorts and sandals, or tie-die, next to them.

Progeny’s story was to be a complicated one. We set out to rock the world. We didn’t. We didn’t set out to make lasting friendships, but we often did. We set out to accomplish great things, and we did some of that, too.

We experienced a full range of emotions there — elation when we got hardware auto-detection working well or when our downloads looked very popular, despair when our funding didn’t come through as we had hoped, being lost when our strategy had to change multiple times. And, as is the case everywhere, none of us were perfect.

I still remember the excitement after we published our first release on the Internet. Our little server that could got pegged at 100Mb of outbound bandwidth (that was something for a small company in those days.) The moment must have meant something, because I still have the mrtg chart from that day on my computer, 15 years later.

We made a good Linux distribution, an excellent Debian derivative, but commercial success did not flow from it. In the succeeding months, Ian and the company tried hard to find a strategy that would stick and make our big break. But that never happened. We had several rounds of layoffs when hoped-for funding never materialized. Ian eventually lost control of the company, and despite a few years of Itanium contract work after I left, closed for good.

Looking back, Progeny was life — compressed. During the good times, we had joy, sense of accomplishment, a sense of purpose at doing something well that was worth doing. I had what was my dream job back then: working on Debian as I loved to do, making the world a better place through Free Software, and getting paid to do it. And during the bad times, different people at Progeny experienced anger, cynicism, apathy, sorrow for the loss of our friends or plans, or simply a feeling to soldier on. All of the emotions, good or bad, were warranted in their own way.

Bruce Byfield, one of my co-workers at Progeny, recently wrote a wonderful memoriam of Ian. He wrote, “More than anything, he wanted to repeat his accomplishment with Debian, and, naturally he wondered if he could live up to his own expectations of himself. That, I think, was Ian’s personal tragedy — that he had succeeded early in life, and nothing else he did with his life could quite measure up to his expectations and memories.”

Ian was not the only one to have some guilt over Progeny. I, for years, wondered if I should have done more for the company, could have saved things by doing something more, or different. But I always came back to the conclusion I had at the time: that there was nothing I could do — a terribly sad realization.

In the years since, I watched Ubuntu take the mantle of easy-to-install Debian derivative. I saw them reprise some of the ideas we had, and even some of our mistakes. But by that time, Progeny was so thoroughly forgotten that I doubt they even realized they were doing it.

I had long looked at our work at Progeny as a failure. Our main goal was never accomplished, our big product never sold many copies, our company eventually shuttered, our rock-the-world plan crumpled and forgotten. And by those traditional measurements, you could say it was a failure.

But I have come to learn in the years since that success is a lot more than those things. Success is also about finding meaning and purpose through our work. As a programmer, success is nailing that algorithm that lets the application scale 10x more than before, or solving that difficult problem. As a manager, success is helping team members thrive, watching pieces come together on projects that no one person could ever do themselves. And as a person, success comes from learning from our experiences, and especially our mistakes. As J. Michael Straczynski wrote in a Babylon 5 episode, loosely paraphrased: “Maybe this experience will be a good lesson. Too bad it was so painful, but there ain’t no other kind.”

The thing about Progeny is this – Ian built a group of people that wanted to change the world for the better. We gave it our all. And there’s nothing wrong with that.

Progeny did change the world. As us Progeny alumni have scattered around the country, we benefit from the lessons we learned there. And many of us were “different”, sort of out of place before Progeny, and there we found others that loved C compilers, bootloaders, and GPL licenses just as much as we did. We belonged, not just online but in life, and we went on to pull confidence and skill out of our experience at Progeny and use them in all sorts of ways over the years.

And so did Ian. Who could have imagined the founder of Debian and Progeny would one day lead the cause of an old-guard Unix turning Open Source? I run ZFS on my Debian system today, and Ian is partly responsible for that — and his time at Progeny is too.

So I can remember Ian, and Progeny, as a success. And I leave you with a photo of my best memento from the time there: an original unopened boxed copy of Progeny Linux.

Carl Chenet: To be alerted if no more tweets are sent from your Twitter account: Twitterwatch

4 January, 2016 - 06:00

This version 0.1 of Twitterwatch is dedicated to Ian Murdock, Debian project Founder.

If you use an automated system like Feed2tweet in order to feed your Twitter account from your website or any RSS feed, it is of the utmost importance to know when this system is broken, because you are losing traffic during the malfunction.

At the Journal du hacker, a French-speaking FOSS website where people contribute weblinks, like Hacker News or Lobste.rs, every new contribution is sent through the RSS feed to the Twitter account of the Journal du hacker. A lot of our readers only follow the news on Twitter. So if this link breaks, our readers are not fed any more :)

To be alerted to this issue, I wrote a small self-hosted and documented app. I’m proud to introduce Twitterwatch!

It runs with Python 3.4 and the Tweepy library. You can install it from PyPI or from sources. The documentation is available on Readthedocs. You can read about how to install it, configure it and use it. It’s the first release but the main features do the job already.

In order to use it, you need a twitterwatch.ini file like the following:

[twitter]
consumer_key=ml6jaiBnf3pkU9uIrKNIxAr3o
consumer_secret=4Cmljklzerkhfer4hlj3ljl2hfvc123rezrfsdatpokaelzerp
access_token=213416790-jgJnrJG5gz132nzerl5zerwi1ahmnwkfJFN9nr2j
access_token_secret=WjanlPgqDKlunJ4Knr11k2bnfk3jfnwkFjeriFZERj16Z
[schedule]
; duration between 2 checks in minutes
check_interval=60
[mail]
host=localhost
from=admin@myserver.org
to=foo@mylaptop.org

The most difficult part is now complete. To launch Twitterwatch, just run the following command:

$ twitterwatch /path/to/twitterwatch.ini

Whenever no new tweet has been sent from your Twitter account for 60 minutes, you’ll receive the following mail (here the user is @journalduhacker):

Title: Twitter stream of journalduhacker has stopped

Body: Twitter stream of journalduhacker has stopped since 23/12/2015 - 14:49:23

It is still pretty rough, but it does the job. New features will come whenever I need them (or if you request them/open bugs/submit pull requests ;) ). Write a comment to tell me what you think of it or why you use it!


Russ Allbery: control-archive 1.7.0

4 January, 2016 - 05:43

First new release in a while. There haven't been a lot of changes to Usenet hierarchies. The primary change is more aggressive dropping of control messages for reserved hierarchies, mostly to suppress pointless email to news administrators.

There were also the following hierarchy updates:

  • wpg.* no longer has an active maintainer
  • Update metadata and PGP key for dictator.*

These changes are already live on the ftp.isc.org control.ctl file. You can get the latest version from the control-archive distribution page.

Gregor Herrmann: RC bugs 2015/53

4 January, 2016 - 03:59

& another round of RC bug fixes, still related to the perl 5.22 transition (& yay, 5.22 has been in testing for some days now!):

  • #808209 – amanda-common: "amanda-common: Depends on virtual package "perl5" which will is gone with perl/5.22"
    replace perl5 dependencies, NMU with maintainer's approval
  • #808321 – votca-csg-scripts: "votca-csg-scripts: Depends on virtual package "perl5" which will is gone with perl/5.22"
    fix dependency, upload to DELAYED/3
  • #809192 – src:libterm-termkey-perl: "libterm-termkey-perl: FTBFS: 05flags.t: Non-zero wait status: 11"
    set TERM for tests (pkg-perl)
  • #809198 – maildirsync: "maildirsync broken with perl 5.22"
    add upstream patch, upload to DELAYED/3
  • #809583 – src:libgenome-model-tools-music-perl: "libgenome-model-tools-music-perl: FTBFS: use Genome::Model::Tools::Music::Survival': Can't use 'defined(@array)"
    fix 'defined(@array)' error (pkg-perl)

Lunar: Reproducible builds: week 36 in Stretch cycle

3 January, 2016 - 23:28

What happened in the reproducible builds effort between December 27th and January 2nd:

Infrastructure

dak now silently accepts and discards .buildinfo files (commit 1, 2), thanks to Niels Thykier and Ansgar Burchardt. This was later confirmed as working by Mattia Rizzolo.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: banshee-community-extensions, javamail, mono-debugger-libs, python-avro.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Untested changes:

  • fltk1.1/1.1.10-20 by Aaron M. Ucko, currently FTBFS.
  • fltk1.3/1.3.3-5 by Aaron M. Ucko, currently FTBFS.

reproducible.debian.net

The testing distribution (the upcoming stretch) is now tested on armhf. (h01ger)

Four new armhf build nodes provided by Vagrant Cascadian were integrated into the infrastructure. This allowed for 9 new armhf builder jobs. (h01ger)

The RPM-based build system, koji, is now in unstable and testing. (Marek Marczykowski-Górecki, Ximin Luo).

Package reviews

131 reviews have been removed, 71 added and 53 updated in the previous week.

58 new FTBFS reports were made by Chris Lamb and Chris West.

New issues identified this week: nondeterminstic_ordering_in_gsettings_glib_enums_xml, nondeterminstic_output_in_warnings_generated_by_breathe, qt_translate_noop_nondeterminstic_ordering.

Misc.

Steven Chamberlain explained at length why reproducible cross-building across architectures matters, and posted results of his tests comparing a stage1 debootstrapped chroot of linux-i386 built once from official Debian packages and once cross-built from kfreebsd-amd64.
