The lsdvd project got a new set of developers a few weeks ago, after the original developer decided to step down and pass the project to fresh blood. This project is now maintained by Petter Reinholdtsen and Steve Dibb.
- Ignore 'phantom' audio, subtitle tracks
- Check for garbage in the program chains, which indicates that a track is non-existent, to work around additional copy protection
- Fix displaying content type for audio tracks, subtitles
- Fix palette display of first entry
- Fix include orders
- Ignore read errors in titles that would not be displayed anyway
- Fix the chapter count
- Make sure the array size and the array limit used when initialising the palette are the same.
- Fix array printing.
- Correct subsecond calculations.
- Add sector information to the output format.
- Clean up code to be closer to ANSI C and compile without warnings with more GCC compiler warnings.
This release brings together patches for lsdvd in use in various Linux and Unix distributions, as well as patches submitted to the project over the last nine years. Please check it out. :)
I’ve just applied to be a (non-uploading) Debian Developer. I’ve filled in the form and decrypted the message that I received to confirm my application (I had read the important documents a long time ago, and again some weeks ago, and again some days ago).
I was expecting to gather some GPG signatures today, but the event was cancelled (postponed). So beginning next week, I’ll try to gather GPG signatures one by one, by myself.
The outdated translations of the website are finished (no more yellow stickers in the Spanish http://www.debian.org!), and I have already begun translating new files.
I’ve sent mails to thank some of the people who helped me during this phase as a Debian Contributor.
I think I’ve done everything that I can do for now. So let’s wait.
I don’t know how I will sleep tonight.
You can comment in this pump.io thread.
Today I want to talk about the different approaches of Elektra and Config::Model. We have gotten a lot of questions lately about why Elektra is necessary and what differentiates it from similar tools like Config::Model. While there are a lot of similarities between the two, there are some key differences, and that is what I will be focusing on in this post.
Once a specification is defined for Elektra and a plug-in is written to work with that specification, other developers will be able to reuse them for programs that have similar configurations (such as a specification and plug-in for the ini file type). Additionally, specifications, once defined in KDB, can be used across multiple programs. For instance, I could define a specification for my program within KDB:
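The original example does not survive here; as an illustrative sketch only, such a specification (stored as metadata on a key in KDB) might look roughly like this in an ini-style representation, with the key name taken from the surrounding text and the metadata names hypothetical:

```ini
[show_hidden_files]
type = boolean
default = false
description = Whether the application lists hidden files
```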
Any other program could use my specification just by referring to show_hidden_files. These features allow Elektra to solve the problem of cross-platform configurations by providing a consistent API and also allow users to easily be aware of other applications’ configurations which allows for easier integration between programs.
Config::Model also aims to provide a unified interface for configuration data, and it also supports validation, such as the type=Boolean check in the example above. The biggest difference between the two projects is that Elektra is intended for use by the programs themselves as well as by external GUIs and validation tools, unlike Config::Model. Config::Model provides a tool that lets developers give users a means to interactively edit configuration data in a safe way. Additionally, Elektra uses self-describing data: all the specifications are saved within KDB itself, as metadata. Further differences are that validators for Elektra can be written in any language, because the specifications are just stored as data, and that they can enforce constraints on any access, because plug-ins define the behaviour of KDB itself.
Tying this all together with my GSoC project is the topic of three-way merges. Config::Model actually does not rely on a base version for merges, since its specifications must all be complete. This is also a very good approach to handling merges in an advanced way. It is an avenue that Elektra would like to explore in the future, once we have enough specifications to handle all types of configuration.
I hope that this post clarifies the different approaches of Elektra and Config::Model. While both of these tools offer a better answer to configuration files, they have different goals and implementations that make them unique. I want to mention that we have a good relationship with the developers of Config::Model, who supported my Google Summer of Code project. We believe that both of these tools have their own place and uses, and that they do not compete to achieve the same goals.
Ian S. Donnelly
Back in 2009, I set up githubredir.debian.net, a service that allowed uscan to follow the tags of GitHub-based projects.
Maybe a year or two later, GitHub added the needed bits in their interface, so it was no longer necessary to provide this service. Still, I kept it alive in order not to break things.
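For context, once GitHub exposed tags directly, a debian/watch file could point at them without going through the redirector. A sketch of such a watch file (owner and project names are hypothetical, and the version-matching regex is simplified):

```
version=3
https://github.com/owner/project/tags .*/v?(\d[\d.]*)\.tar\.gz
```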
But as it is just a silly web scraper, every time something changes in GitHub, the redirector breaks. I decided today that, as it is no longer a very useful project, it should be retired.
So, in the not too distant future (I guess, next time anything breaks), I will remove it. Meanwhile, every page generated will display this:
(of course, with the corresponding project/author names in)
Consider yourselves informed.
The MirBSD Korn Shell has got a new security and maintenance release.
This release fixes one mksh(1)-specific issue when importing values from the environment. The issue was detected by the main developer during careful code review while checking whether the shell is affected by the recent “shellshock” bugs in GNU bash, many of which also affect AT&T ksh93. (The answer is: no, none of these bugs affects mksh.) Stéphane Chazelas kindly provided me with an in-depth look at how this can be exploited. The issue has not got a CVE identifier because it was identified as low-risk.

The problem is that the environment import filter mistakenly accepted variables named “FOO+” (for any FOO), which are, by general environ(7) syntax, distinct from “FOO”, and treated them as appending to the value of “FOO”. An attacker who already had access to the environment could thus append values to parameters passed through programs (including sudo(8) or setuid ones) to shell scripts, including indirectly, even after those programs had intended to sanitise the environment, e.g. invalidating the last $PATH component. It could also be used to circumvent sudo’s environment filter, which protects against exploitation of an unpatched GNU bash.
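The distinction matters because environ(7) permits entries whose name part contains characters such as “+”, so “FOO+” is a separate entry, unrelated to “FOO”. A quick illustrative check (the buggy appending behaviour only ever existed in mksh before R50c; a correctly filtering shell simply ignores the odd entry):

```shell
# "FOO+=evil" here is an environment entry literally named "FOO+",
# not an append operation; a correctly filtering shell imports only FOO.
env 'FOO=base' 'FOO+=evil' sh -c 'echo "$FOO"'
# a patched shell prints just: base
```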
tl;dr: mksh not affected by any shellshock bugs, but we found a bug of our own, with low impact, which does not affect any other shell, during careful code review. Please do update to mksh R50c quickly.
You are a software vendor. You distribute software on multiple operating systems. Let’s say your software is a mildly popular internet browser. Let’s say its logo represents an animal and a globe.
Now, because you care about the security of your users, let’s say you would like the entire address space of your application to be randomized, including the main executable portion of it. That would be neat, wouldn’t it? And there’s even a feature for that: Position independent executables.
You get that working on (almost) all the operating systems you distribute software on. Great.
Then a Gnome user (or an Ubuntu user, for that matter) comes, and tells you they downloaded your software tarball, unpacked it, and tried opening your software, but all they get is a dialog telling them:
Could not display “application-name”
There is no application installed for “shared library” files
Because, you see, a Position independent executable, in ELF terms, is actually a (position independent) shared library that happens to be executable, instead of being an executable that happens to be position independent.
And nautilus (the file manager in Gnome and Ubuntu’s Unity) usefully knows to distinguish between executables and shared libraries. And will happily refuse to execute shared libraries, even when they have the file-system-level executable bit set.
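The distinction nautilus trips over lives in the ELF header’s e_type field: a classic executable is ET_EXEC, while a PIE reports itself as ET_DYN, just like a shared library. A small sketch to inspect this (assumes an ELF binary exists at /bin/sh; the helper name is mine):

```python
import struct

# e_type values from the ELF specification
ET_EXEC = 2  # classic fixed-address executable
ET_DYN = 3   # shared object -- which is also what a PIE reports

def elf_type(path):
    """Return the e_type field of an ELF file's header."""
    with open(path, "rb") as f:
        header = f.read(18)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # byte 5 (EI_DATA) says whether the file is little- or big-endian
    fmt = "<H" if header[5] == 1 else ">H"
    (e_type,) = struct.unpack_from(fmt, header, 16)
    return e_type

# On most modern distributions /bin/sh is itself a PIE, i.e. ET_DYN.
print(elf_type("/bin/sh"))
```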
You’d think you could get around this by using a .desktop file, but the Exec field in those files requires a full path. (No, ./ doesn’t work unless the executable is in the nautilus process’s current working directory, i.e. the path nautilus was run from.)
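For illustration, such a launcher would have to hard-code an absolute path (all names and paths below are hypothetical):

```ini
[Desktop Entry]
Type=Application
Name=My Browser
# Exec must be an absolute path; "Exec=./browser" only resolves
# relative to nautilus' current working directory.
Exec=/opt/mybrowser/browser
```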
Dear lazyweb, please prove me wrong and tell me there’s a way around this.
Kudos to Matthew for taking a stance. It has, not surprisingly, provoked a lot of comments and feedback, most of it unpleasant.
If I did anything that was directly related to Intel, I'd join him, but I do very, very little architecture dependent stuff anymore.
I will, however, say this: Even if the "gamergate" were actually about good journalism and ethics (and it's clear it isn't), if your reaction to a differing opinion is abuse, harrassment, and other kinds of psychological violence, you're not making anything better, you're making it all worse.
Reasonable people can handle disagreement without any kind of violence.
Exactly 15 years ago I uploaded to Debian the first release of my whois client.
At the end of 1999 the United States government forced Network Solutions, at the time the only registrar for the .com, .net and .org top-level domains, to split its functions into a registry and a registrar, and to allow competing registrars to operate.
Since then, two whois queries have been needed to access the data for a domain in a TLD operating under the thin registry model: first one to the registry to find out which registrar was used to register the domain, and then one to the registrar to actually get the data.
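The two-step lookup can be sketched as follows (the registry server name for .com is the real Verisign one, but the parsing is simplified and the helper names are mine):

```python
import socket

def whois_query(server, query, port=43):
    """Send one whois query and return the raw text response."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(query.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def parse_referral(response):
    """Extract the registrar whois server from a thin-registry answer."""
    for line in response.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            if key.strip().lower() in ("registrar whois server", "whois server"):
                return value.strip() or None
    return None

# Usage (needs network): query the registry, then follow the referral.
# registry_answer = whois_query("whois.verisign-grs.com", "example.com")
# registrar = parse_referral(registry_answer)
# full_answer = whois_query(registrar, "example.com")
```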
Being as lazy as I am, I thought that this was unacceptable, so I implemented a whois client that knows which whois server to query for all TLDs and then automatically follows the referrals to the registrars.
But the initial reason for writing this program was to replace the simplistic BSD-derived whois client that was shipped with Debian with one that would know which server to query for IP addresses and autonomous system numbers, a useful feature in a time when people still used to manually report all their spam to the originating ISPs.
Over the years I have spent countless hours searching for the right servers for the domains of far away countries (something that has often been incredibly instructive) and now the program database is usually more up to date than the official IANA one.
One of my goals for this program has always been wide portability, so I am happy that over the years it was adopted by other Linux distributions, made available by third parties to all common variants of UNIX and even to systems as alien as Windows and OS/2.
Now that whois is 15 years old I am happy to announce that I have recently achieved complete world domination and that all Linux distributions use it as their default whois client.
For my 31st birthday I decided to build myself a computer, specifically a NAS and backup server which could do some other bits and pieces. I ended up buying a system based on the Gigabyte J1900N-D3V SoC from Mini-ITX (whose after-sales support is great, by the way).
I hope to write up a more comprehensive overview of what I've ended up with (probably in my rather dusty hardware section), but in the meantime I have a question for anyone else with this board:
If you've upgraded the BIOS, do the more recent BIOS versions insist on there being a display connected in order to boot?
Sadly the V1 BIOS version does, which seriously limits the utility of this board for my purposes. I did manage to flash the board up to V3, once, but it later decided to downgrade itself (believing the flashed BIOS to be corrupt). I haven't managed a second time. The EFI implementation in this board is... interesting. Convincing it to boot anything legacy is a tricky task.
As an aside, I recently stumbled across this suggestion on reddit to use an old-ish, Core-era ThinkPad T-series with a dock for this exact purpose: the spare ultrabay gives you two SATA drive slots, the laptop battery serves as a crude UPS, and there's a built-in keyboard and mouse, avoiding the issue I'm having with the J1900N-D3V. A Core i5 is more than fast enough for what I want to do, and it will have VT. Hindsight is a wonderful thing...
It's taken me a while to get sufficiently riled up about Australia's current Islamophobia outbreak, but it's been brewing in me for a couple of weeks.
For the record, I'm an Atheist, but I'll defend your right to practise your religion, just don't go pushing it on me, thank you very much. I'm also not a huge fan of Islam, because it does seem to lend itself to more violent extremism than other religions, and ISIS/ISIL/IS (whatever you want to call them) aren't doing Islam any favours at the moment. I'm against extremism of any stripe though. The Westboro Baptists are Christian extremists. They just don't go around killing people. I'm also not a big fan of the burqa, but again, I'll defend a Muslim woman's right to choose to wear one. The key point here is choice.
I got my carpets cleaned yesterday by an ethnic couple. I like accents, and I was trying to pick theirs. I thought they may have been Turkish. It turned out they were Kurdish. Whenever I hear "Kurd" I habitually stick "Bosnian" in front of it after the Bosnian War that happened in my childhood. Turns out I wasn't listening properly, and that was actually "Serb". Now I feel dumb, but I digress.
I got chatting with the lady while her husband did the work. I got a refresher on where most Kurds are/were (Northern Iraq) and we talked about Sunni versus Shia Islam, and how they differed. I learned a bit yesterday, and I'll have to have a proper read of the Wikipedia article I just linked to, because I suspect I'll learn a lot more.
We briefly talked about burqas, and she said that because they were Sunni, they were given the choice, and they chose not to wear it. That's the sort of Islam that I support. I suspect a lot of the women running around in burqas don't get a lot of say in it, but I don't think banning it outright is the right solution to that. Those women need to feel empowered enough to be able to cast off their burqas if that's what they want to do.
I completely agree that a woman in a burqa entering a secure place (for example Parliament House) needs to be identifiable (assuming that identification is verified for all entrants to Parliament House). If it's not, and they're worried about security, that's what the metal detectors are for. I've been to Dubai. I've seen how they handle women in burqas at passport control. This is an easily solvable problem. You don't have to treat burqa-clad women as second class citizens and stick them in a glass box. Or exclude them entirely.
Recently, as part of the anti-women #GamerGate campaign, a set of awful humans convinced Intel to terminate an advertising campaign because the site hosting the campaign had dared to suggest that the sexism present throughout the gaming industry might be a problem. Despite being awful humans, it is absolutely their right to request that a company choose to spend its money in a different way. And despite it being a dreadful decision, Intel is obviously entitled to spend their money as they wish. But I'm also free to spend my unpaid spare time as I wish, and I no longer wish to spend it doing unpaid work to enable an abhorrently-behaving company to sell more hardware. I won't be working on any Intel-specific bugs. I won't be reverse engineering any Intel-based features. If the backlight on your laptop with an Intel GPU doesn't work, the number of fucks I'll be giving will fail to register on even the most sensitive measuring device.
On the plus side, this is probably going to significantly reduce my gin consumption.
 In the spirit of full disclosure: in some cases this has resulted in me being sent laptops in order to figure stuff out, and I was not always asked to return those laptops. My current laptop was purchased by me.
 I appreciate that there are some people involved in this campaign who earnestly believe that they are working to improve the state of professional ethics in games media. That is a worthy goal! But you're allying yourself to a cause that disproportionately attacks women while ignoring almost every other conflict of interest in the industry. If this is what you care about, find a new way to do it - and perhaps deal with the rather more obvious cases involving giant corporations, rather than obsessing over indie developers.
For avoidance of doubt, any comments arguing this point will be replaced with the phrase "Fart fart fart".
 Except for the purposes of finding entertaining security bugs
This is my monthly summary of my free software related activities. If you’re among the people who made a donation to support my work (26.6 €, thanks everybody!), then you can learn how I spent your money. Otherwise it’s just an interesting status update on my various projects.

Django 1.7
Since Django 1.7 got released early September, I updated the package in experimental and continued to push for its inclusion in unstable. I sent a few more patches to multiple reverse build dependencies who had asked for help (python-django-bootstrap-form, horizon, lava-server) and then sent the package to unstable. At that time, I bumped the severity of all bug filed against packages that were no longer building with Django 1.7.
Later in the month, I made sure that the package migrated to testing; it only required a temporary removal of mumble-django (see #763087). Quite a few packages got updated since then (remaining bugs here).

Debian Long Term Support
I have worked towards keeping Debian Squeeze secure; see the dedicated article: My Debian LTS report for September 2014.

Distro Tracker
The pace of development on tracker.debian.org slowed down a bit this month, with only 30 new commits in the repository, closing 6 bugs. Some of the changes are noteworthy though: news items now contain real links for bugs, CVEs and plain URLs (example here). I have also fixed a serious issue with the way users were identified when they used their Alioth account credentials to login via sso.debian.org.
On the development side, we’re now able to generate the test suite code coverage, which is quite helpful to identify parts of the code that are clearly missing some tests (see bin/gen-coverage.sh in the repository).

Misc packaging
Publican. I have been behind packaging new upstream versions of Publican and with the freeze approaching, I decided to take care of it. Unfortunately, it wasn’t as easy as I had hoped and found numerous issues that I have filed upstream (invalid public identifier, PDF build fails with noNumberLines function available, build of the manual requires the network). Most of those have been fixed upstream in the mean time but the last issue seems to be a problem in the way we manage our Docbook XML catalogs in Debian. I have thus filed #763598 (docbook-xml: xmllint fails to identify local copy of docbook entities file) which is still waiting an answer from the maintainer.
Package sponsorship. I have sponsored new uploads of dolibarr (RC bug fix), tcpdf (RC bug fix), tryton-server (security update) and django-ratelimit.
GNOME 3.14. With the arrival of GNOME 3.14 in unstable, I took care of updating gnome-shell-timer and also filed some tickets for extensions that I use: https://github.com/projecthamster/shell-extension/issues/79 and https://github.com/olebowle/gnome-shell-timer/issues/25
git-buildpackage. I filed multiple bugs on git-buildpackage for little issues that have been irking me since I started using this tool: #761160 (gbp pq export/switch should be smarter), #761161 (gbp pq import+export should preserve patch filenames), #761641 (gbp import-orig should be less fragile and more idempotent).

Thanks
See you next month for a new summary of my activities.
Today, I received my Acer Chromebook 13, in the glorious FullHD variant with 4GB RAM. For those of you who don’t know it, the Acer Chromebook 13 is a 13.3 inch chromebook powered by a Tegra K1 cpu.
This version cannot currently be ordered; only pre-orders were shipped yesterday (at least here in Germany). I cannot even review it on Amazon (despite having bought it there), as they have not enabled reviews for it yet.
The device feels solidly built, and looks good. It comes in all-white matte plastic and is slightly reminiscent of the old white MacBooks. The keyboard is horrible; there’s no well-defined pressure point. It feels like you’re typing on a pillow. The display is OK, though an IPS panel would be a lot nicer to work with. Oh, and it could be brighter. I do not think that using it outside on a sunny day would be a good idea. The speakers are loud and clear compared to my ThinkPad X230.
The performance of the device is about acceptable (unfortunately, I do not have any comparison in this device class). Even when typing this blog post in the visual WordPress editor, I notice some sluggishness. Opening the app launcher or loading the new-tab page while music is playing makes the music stop or skip for a few ms (20-50ms if I had to guess). Running a benchmark in parallel or browsing does not usually cause this stuttering, though.
There are still some bugs in Chrome OS: Loading the Play Books library the first time resulted in some rendering issues. The “Browser” process always consumes at least 10% CPU, even when idling, with no page open; this might cause some of the sluggishness I mentioned above. Also watching Flash videos used more CPU than I expected given that it is hardware accelerated.
Finally, Netflix did not work out of the box, despite the Chromebook shipping with a special Netflix plugin. I always get some unexpected issue-type page. Setting the user agent to Chrome 38 from Windows, thus forcing the use of the EME video player instead of the Netflix plugin, makes it work.
I reported these software issues to Google via Alt+Shift+I. The issues appeared on the current version of the stable channel, 37.0.2062.120.
What’s next? I don’t know.
At my university, we recently held an exam that covered a bit of Haskell, and a simple warm-up question at the beginning asked the students to implement last :: [a] -> a. We did not demand a specific behaviour for last [].
This is a survey of various solutions, only covering those that are actually correct. I elided some variation in syntax (e.g. guards vs. if-then-else).
Most wrote the naive and straightforward code:
    last [x]    = x
    last (x:xs) = last xs
Then quite a few seemed to be uncomfortable with pattern-matching and used conditional expressions. There was some variety in finding out whether a list is empty:
    last (x:xs) | null xs == True = x
                | otherwise       = last xs

    last (x:xs) | length (x:xs) == 1 = x
                | otherwise          = last xs

    last (x:xs) | length xs == 0 = x
                | otherwise      = last xs

    last xs | length xs > 1 = last (tail xs)
            | otherwise     = head xs

    last xs | length xs == 1 = head xs
            | otherwise      = last (tail xs)

    last (x:xs) | xs == []  = x
                | otherwise = last xs
The last one is not really correct, as it has the stricter type Eq a => [a] -> a. Also we did not expect our students to avoid the quadratic runtime caused by using length in every step.
The next class of answers used length to pick out the right element, either using (!!) directly, or simulating it with head and drop:
    last xs = xs !! (length xs - 1)
    last xs = head (drop (length xs - 1) xs)
There were two submissions that spelled out an explicit left folding recursion:
    last (x:xs) = lastHelper x xs
      where lastHelper z []     = z
            lastHelper z (y:ys) = lastHelper y ys
And finally there are a few code-golfers that just plugged together some other functions:
    last x = head (reverse x)
Quite a lot of ways to write last!
For quite some time, I've been running a translation server for projects I am involved in at l10n.cihar.com. Historically this used Pootle, but when we ran into more and more problems with it, I wrote Weblate and started to use it there.
As Weblate became more popular and I got requests to help people with running it, I realized that it might be a good idea to run a server where I could host translations for other projects. This is when Hosted Weblate was born.
After some time, I've realized that it really makes little sense to run and maintain separate servers for these sets of projects, so I've decided to move all translations from l10n.cihar.com to hosted.weblate.org. Today this move was completed by moving translations for phpMyAdmin.
Starting an article with self-laudation might be bad style, but this month I was busy as a bee and could accept 312 packages, 75 more than last month. 34 times I contacted the maintainer to ask a question, and 51 times I had to reject a package. These numbers remain roughly constant.
The number of packages in NEW dropped to about 180. If you want your package included in Jessie, please double-check it and upload an improved version.
All in all I got assigned a workload of 11h for September, and I spent these hours uploading new versions of:
- [DLA 43-1] eglibc security update
- [DLA 64-1] curl security update
- [DLA 67-1] php5 security update
- [DLA 68-1] fex security update
I further tried to upload a new version of python-django. Unfortunately I could not figure out why some of the internal tests of the package failed, so I forwarded the package to Raphael, who could resolve all issues.
The Squeeze version of PHP5 contains 140 patches. According to quilt, 47 of them are identified as already being in 5.3.29, and 48 patches need to be revised. Some of them are really big, rather old, and not really supported in the new 5.3.n version.
As nobody will talk about Squeeze LTS in a few months, I had better avoid the hassle of preparing a point release and concentrate only on security patches from now on.
This month I uploaded a new version of net-dns-fingerprint, which closes an RC bug. Unfortunately the package does not work with all DNS servers anymore. Patches or hints about what happened are very welcome.
If you would like to support my Debian work you could either be part of the Freexian initiative (see above) or consider sending some bitcoins to 1JHnNpbgzxkoNexeXsTUGS6qUp5P88vHej. Contact me at firstname.lastname@example.org if you prefer another way to donate. Every kind of support is most appreciated.
When I first read that Linux kernel developer Valerie Aurora would be changing careers to work full-time on behalf of women in open source communities, I never imagined it would lead so far so fast. Today, The Ada Initiative is a non-profit organization with global reach, whose programs have helped create positive change for women in a wide range of communities beyond open source. Building on this foundation, imagine how much more they can do in the next four years! That’s why I’m pledging my continuing support, and asking you to join me.
For the next 7 days, I will personally match your donations up to $4,096. My employer, Heroku (Salesforce.com), will match my donations too, so every dollar you contribute will be tripled!
My goal is that together we will raise over $12,000 toward The Ada Initiative’s 2014 fundraising drive.
Since about 1999, I had been working in open source communities like Debian and Ubuntu, where women are vastly underrepresented even compared to the professional software industry. Like other men in these communities, I had struggled to learn what I could do to change this. Such a severe imbalance can only be addressed by systemic change, and I hardly knew where to begin. I worked to raise awareness by writing and speaking, and joined groups like Debian Women, Ubuntu Women and Geek Feminism. I worked on my own bias and behavior to avoid being part of the problem myself. But it never felt like enough, and sometimes felt completely hopeless.
Perhaps worst of all, I saw too many women burning out from trying to change the system. It was often taxing just to participate as a woman in a male-dominated community, and the extra burden of activism seemed overwhelming. They were all volunteers, doing this work in evenings and weekends around work or study, and it took a lot of time, energy and emotional reserve to deal with the backlash they faced for speaking out about sexism. Valerie Aurora and Mary Gardiner helped me to see that an activist organization with full-time staff could be part of the solution. I joined the Ada Initiative advisory board in February 2011, and the board of directors in April.
Today, The Ada Initiative is making a difference not only in my community, but in my workplace as well. When I joined Heroku in 2012, none of the engineers were women, and we clearly had a lot of work to do to change that. In 2013, I attended AdaCamp SF along with my colleague Peter van Hardenberg, joining the first “allies track”, open to participants of any gender, for people who wanted to learn the skills to support the women around them. We’ve gone on to host two ally skills workshops of our own for Heroku employees, one taught by Ada Initiative staff and another by a member of our team, security engineer Leigh Honeywell. These workshops taught interested employees simple, everyday ways to take positive action to challenge sexism and create a better workplace for women. The Ada Initiative also helped us establish a policy for conference sponsorship which supports our gender diversity efforts. Today, Heroku engineering includes about 10% women and growing. The Ada Initiative’s programs are helping us to become the kind of company we want to be.
I attended the workshop with a group of Heroku colleagues, and it was a powerful experience to see my co-workers learning tactics to support women and intervene in sexist situations. Hearing them discuss power and privilege in the workplace, and the various “a-ha!” moments people had, was very encouraging and made me feel heard and supported.
– Leigh Honeywell
I’ve reported a bug against bridge-utils, but perhaps someone has already seen this and has a fix. My virtual IPv6 machines often lose connectivity from time to time. Tracking this down, it seems that the router sends Neighbor Solicitations (IPv6 ARPs, basically). The physical interface of the bridge group receives them, but the vnet0 one does not.
Using tshark I can see the pings on vnet0, but on br0 and eth1 I see both the ping requests and the NS packets. So there is something odd going on with the bridge interface.
If I remove and add the vnet0 interface from the bridge group, the connectivity comes back.
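That workaround amounts to the following bridge-utils commands (interface names as in the post; this needs root and a real bridge, so treat it as a sketch):

```shell
# Kick the virtual interface out of the bridge and re-add it; after
# this, the NS packets reach vnet0 again and connectivity returns.
brctl delif br0 vnet0
brctl addif br0 vnet0
```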
In the beginning of September I spent quite some time fixing bugs in the Debian Security Tracker, which now, thanks to the awesome CSS from Ulrike, looks really good and professional! There are still some bugs to fix and features I'd like to add, e.g. the ability to in- and exclude (old)oldstable/lts/backports/nodsa/EOL everywhere. It was fun to squash #742382 #642987 #742855 #762214 #479727 #610220 #611163 and #755800!
And then I also discovered dgit, as in "I've used it for the first time". It was so great, I immediately did a backport of it and uploaded it to wheezy-backports.
So during the last month I made these uploads to squeeze-lts:
- DLA 56-1 for wordpress, fixing CVE-2014-2053 CVE-2014-5204 CVE-2014-5205 CVE-2014-5240 CVE-2014-5265 CVE-2014-5266
- DLA 57-1 for libstruts1.2-java, fixing CVE-2014-0114
- DLA 60-1 for icinga, fixing CVE-2013-7108 and CVE-2014-1878
- DLA 61-1 for libplack-perl, fixing CVE-2014-5269
- DLA 62-1 for nss, fixing CVE-2014-1568
- DLA 66-1 for apache2, fixing CVE-2013-6438 CVE-2014-0118 CVE-2014-0226 CVE-2014-0231
Plus I filed #762715, asking the devscripts maintainers to add an --lts option to dch, and #763339 against lintian, asking it to recognize "squeeze-lts" as a suite.
Here's three things you could do to contribute to Debian LTS:
- help fixing #751403 - that is, mention Debian LTS properly on www.debian.org
- help maintain packages in oldstable with known security vulnerabilities, e.g. by testing, providing patches, or confirming that they indeed affect Squeeze.
- help by supporting LTS financially.
Thanks to everybody supporting LTS already!