Planet Debian


Norbert Preining: 10 years TeX Live in Debian

12 January, 2016 - 06:43

I recently dug through my history of involvement with TeX (Live), and found out that January brings a lot of “anniversaries” I should celebrate: 14 years ago I started building binaries for TeX Live, 11 years ago I proposed packaging TeX Live for Debian, and 10 years ago the TeX Live packages entered Debian. There are other things to celebrate next year (2017), namely the 10 year anniversary of the (not so new anymore) infrastructure – in short, tlmgr – of TeX Live packaging, but that will come later. In this blog post I want to concentrate on my involvement in TeX Live and Debian.

Those of you not interested in a boring and melancholic look back at history can safely skip this one. For those a bit interested in the history of TeX in Debian, please read on.

Debian releases and TeX systems

For many years the TeX system of choice was teTeX, curated by Thomas Esser. Digging through the Debian archive and combining it with changelog entries as well as personal experience since I joined Debian, here is a time line of TeX in Debian, all to the best of my knowledge.

Date     Version  Name     teTeX/TeX Live             Maintainers
1993-96  <1       ?        ?                          Christoph Martin
6/1996   1.1      Buzz     ?
12/1996  1.2      Rex      ?
6/1997   1.3      Bo       teTeX 0.4
7/1998   2.0      Hamm     teTeX 0.9
3/1999   2.1      Slink    teTeX 0.9.9N
8/2000   2.2      Potato   teTeX 1.0
7/2002   3.0      Woody    teTeX 1.0
6/2005   3.1      Sarge    teTeX 2.0                  Atsuhito Kohda
4/2007   4.0      Etch     teTeX 3.0, TeX Live 2005   Frank Küster, NP
2/2009   5.0      Lenny    TeX Live 2007              NP
2/2011   6.0      Squeeze  TeX Live 2009
5/2013   7.0      Wheezy   TeX Live 2012
4/2015   8.0      Jessie   TeX Live 2014
???      ???      Stretch  TeX Live ≥2015

The history of TeX in Debian is thus split more or less into 10 years of teTeX and 10 years of TeX Live. While I cannot check back to the origins, my guess is that (te)TeX was included already in the very first releases. The first release I can confirm (via the Debian archive) shipping teTeX is Bo (June 1997). Maintainership during the first 10 years showed some fluctuation: the first years/releases (till about 2002) were dominated by Christoph Martin with Adrian Bunk and a few others, who did most of the packaging work on teTeX version 1. After this Atsuhito Kohda, with help from Hilmar Preusse and some other people, brought teTeX up to version 2, and from 2004 to 2007 Frank Küster, again with help from Hilmar Preusse and some others, took over most of the work on teTeX. Other names appearing throughout the changelog are (incomplete list) Julian Gilbey, Ralf Stubner, LaMont Jones, and C.M. Connelly (and many more bug reporters and fixers).

Looking at the above table I have to mention the incredible amount of work that both Atsuhito Kohda and Frank Küster put into the teTeX packages; many of their contributions have been carried over into the TeX Live packages. While there weren't many releases during their maintainership, their work has inspired and supported the packaging of TeX Live to a huge extent.

Start of TeX Live

I got involved in TeX Live back in 2002 when I started building binaries for the alpha-linux architecture. I can’t remember when I first had the idea to package TeX Live for Debian, but here is a time line from my first email to the Debian Developers mailing list concerning TeX Live, to the first accepted upload:

2005-01-11  binaries for different architectures in debian packages
            The first question concerning packaging TeX Live, about including pre-built binaries
2005-01-25  Debian-TeXlive Proposal II
            A better proposal, but still including pre-built binaries
2005-05-17  Proposal for a tex-base package
            Proposal for tex-base, later tex-common, as a basis for both teTeX and TeX Live packages
2005-06-10  Bug#312897: ITP: texlive
            ITP bug for TeX Live
2005-09-17  Re: Take over of texinfo/info packages
            Taking over texinfo, which was somehow orphaned, started here
2005-11-28  Re: texlive-basic_2005-1_i386.changes REJECTED
            My answer to the rejection by ftp-master of the first upload. This email sparked a long discussion about packaging and helped improve the naming of packages (but not really the packaging itself).
2006-01-12  Upload of TeX Live 2005-1 to Debian
            The first successful upload
2006-01-22  Accepted texlive-base 2005-1 (source all)
            TeX Live packages accepted into Debian/experimental

One can see from the first emails that at that time I didn't have any idea about Debian packaging and proposed to ship the binaries built within the TeX Live system on Debian. What followed was first a long discussion about whether there was any need for yet another TeX system. The then maintainer Frank Küster took a clear stance in favor of including TeX Live, and after several rounds of proposals, tests, rejections and improvements, the first successful upload of TeX Live packages to Debian/experimental happened on 12 January 2006, exactly 10 years ago.

Packaging

Right from the beginning I used a meta-packaging approach. That is, instead of working directly with the source packages, I wrote (Perl) scripts that generated the source packages from a set of directives. There were several reasons why I chose to introduce this extra layer:

  • The original format of the TeX Live packaging information (tpm) was XML files that were parsed with an XML parser (libxml). I guess (from what I have seen over the years) that I was the only one ever properly parsing these .tpm files for packaging.
  • TeX Live packages were often reshuffled, and Debian package names changed, which would have caused a certain level of pain for the creation of original tar files and packaging.
  • Flexibility in creating additional packages and arbitrary dependencies.

Even now I am not 100% sure whether it was the best idea, but the scripts remain in place, only adapted to the new packaging paradigm in TeX Live (without XML) and extended with new functionality. This allows me to just kick off one script that does all the work, including building the .orig.tar.gz, the source packages, and the binary packages.

For those interested in following the frantic activity during the first years, there is a file CHANGES.packaging which documents very extensively the changes I made between 2005 and 2011. I don't want to count the hours that went into all this.

Development over the years

TeX Live 2005 was just another TeX system, but not the preferred one, in Debian Etch and beyond. But then in May 2006, Thomas Esser announced the end of development of teTeX, which cleared the path for TeX Live as the main TeX system in Debian (and the world!). The next release of Debian, Lenny (2/2009), already carried only TeX Live. Unfortunately it was only TeX Live 2007 and not 2008, mostly because I had been involved in rewriting the upstream infrastructure based on Debian package files instead of the notorious XML files. That diverted quite a lot of attention and time from Debian to upstream development, but it will be discussed in a different post.

Similarly, the release of TeX Live included in Debian Squeeze (released 2/2011) was only TeX Live 2009 (instead of 2010), but since then (Wheezy and Jessie) the releases of TeX Live in Debian have always been the latest released ones.

Current status

Since about 2013 I have been trying to keep a regular schedule of new TeX Live packages every month. This helps me keep up with the changes in upstream packaging and reduces the load of packaging a new release of TeX Live. It also brings users of unstable and testing a very up-to-date TeX system, where packages lag at most one month behind the TeX Live net updates.

Future

As most of the readers here know, besides caring for TeX (Live) and related packages in Debian, I am also responsible for the TeX Live Manager (tlmgr) and most of upstream's infrastructure, including network distribution. Thus, my (spare, outside work) time needs to be distributed between all these projects (and some others), which leaves less and less time for Debian packaging. Fortunately the packaging is in a state where making regular updates once a month is less of a burden, since most steps are automated. What is still a bit of a struggle is adapting the binary package (src:texlive-bin) to new releases. But this, too, has become simpler due to less invasive changes over the years.

All in all, I don’t have many plans for TeX Live in Debian besides keeping the current system running as it is.

Search for, and advice to, future maintainers and collaborators

I would be more than happy if new collaborators appeared, with fresh ideas and some spare time. Unfortunately, my experience over these 10 years with people showing up and proposing changes (anyone remember the guy proposing a complete rewrite in ML or so?) is that nobody really wants to invest time and energy, but searches for quick solutions. That will not work with a package like TeX Live, several gigabytes in size (the biggest in the Debian archive), and with complicated inner workings.

I advise everyone interested in helping to package TeX Live for Debian to first install plain TeX Live from TUG and get used to what actions happen during updates (format rebuilds, hyphenation patterns, map file updates). One does not need a perfect understanding of what exactly happens down there in the guts (I didn't have one in the beginning, either), but if you want to help with packaging and have never heard of format dumps or map files, that might be a slight obstacle.

Conclusion

TeX Live is the only TeX system in wide use across lots of architectures and operating systems; the only comparable system, MikTeX, is Windows-specific (although there are traces of ports to Unix). Backed by all the big TeX user groups, TeX Live will remain the prime choice for the foreseeable future, and thus also TeX Live in Debian.

Carl Chenet: Extend your Twitter network with Retweet

12 January, 2016 - 06:00

Retweet is a self-hosted app written in Python 3 that retweets all the statuses from a given Twitter account to another one. Lots of filters can be used to retweet only tweets matching given criteria.

Retweet 0.8 is available on the PyPI repository and is already in the official Debian unstable repository.

Retweet is already in production for Le Journal du hacker, a French FOSS community website to share and relay news, and LinuxJobs.fr, a job board for the French-speaking FOSS community.

The new features of 0.8 allow Retweet to handle tweets based on how old they are (see the sketch after the list), retweeting them only if:

  • they are older than a user-specified duration with the parameter older_than
  • they are younger than a user-specified duration with the parameter younger_than
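
For illustration, here is a hypothetical configuration extract. The section name, key syntax and duration units below are my assumptions, not taken from the Retweet documentation, so check the official documentation for the real syntax:

; hypothetical Retweet configuration extract (sketch only)
; section name and duration units are assumptions
[retweet]
older_than: 60        ; only retweet statuses older than this duration
younger_than: 1440    ; only retweet statuses younger than this duration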

Retweet is extensively documented; have a look at the official documentation to understand how to install, configure and use it.

What about you? Does Retweet help you develop your Twitter account? Leave your comments on this article.


Scott Kitterman: Python3.5 is default python3 in sid

12 January, 2016 - 05:40

As of today, python3 -> python3.5.  There's a bit of a transition, but fortunately, because most extensions are packaged to build for all supported python3 versions, we started this transition at about 80% done.  Thank you to the maintainers that have done that.  It makes these transitions much smoother.

As part of getting ready for this transition, I reviewed all the packages that needed to be rebuilt for this stage of the transition to python3.5 and a few common errors stood out:

  1. For python3 it’s {python3:Depends} not {python:Depends}.
  2. Do not use {python3:Provides}.  This has never been used for python3 (go read the policy if you doubt me [1]).
  3. Almost for sure do not use {python:Provides}.  The only time it should still be used is if some package depends on python2.7-$PACKAGE. It would surprise me if any of these are left in the archive.  If so, since python2.7 is the last python2, then they should be adjusted.  Work with the maintainer of such an rdepend and once it’s removed, then drop the provides.
  4. Do not use XB-Python-Version.  We no longer use this to manage transitions (there won’t be any more python transitions).
  5. Do not use XB-Python3-Version.  This was never used.
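
To make points 1 and 2 concrete, here is a minimal sketch of a correct binary package stanza in debian/control (the package name is made up for illustration):

Package: python3-example
Architecture: all
Depends: ${python3:Depends}, ${misc:Depends}
Description: hypothetical Python 3 module
 Note the python3 substvar in Depends, and the absence of any Provides
 substvar and of the XB-Python-Version/XB-Python3-Version fields.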

Now that we have robust transition trackers [2], the purpose for which XB-Python-Version existed is obsolete.

In other news, pysupport was recently removed from the archive.  This means that, following the previous removal of pycentral, we finally have one and only one python packaging helper (dh-python) that supports both python and python3.  Thanks to everyone who made that possible.

 

[1] https://www.debian.org/doc/packaging-manuals/python-policy/

[2] https://release.debian.org/transitions/html/python3.5.html

Sven Hoexter: grep | wc -l

12 January, 2016 - 04:53

I did some musing on my way home about a line of shell scripting similar to

if [ `grep foobar somefile |  wc -l` -gt 0 ]; then ...

Yes, it's obvious that silencing grep and working with the return code is way more elegant, and the backticks are also deprecated, or at least discouraged, nowadays. For this special case "grep -c" is not the right replacement. Just in case.
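
For reference, a minimal sketch of the more idiomatic forms (pattern and file name are placeholders):

# test for a match via the return code; -q keeps grep quiet
if grep -q foobar somefile; then
    echo "found"
fi

# when the number of matching lines is actually needed
matches=$(grep -c foobar somefile)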

So I wanted to know how widespread the "grep | wc -l" chaining actually is. codesearch.d.n to the rescue! At least in some codebases it seems to be rather widespread, so maybe "grep -c" is not POSIX compliant? Nope. Traveling back a few years, a somewhat older manpage also lists a "-c" option. So at least for now I doubt that this is some kind of backwards compatibility thing. Even busybox supports it.

As you can obviously deduce from the matching lines, and my rather fuzzy search pattern, there are valid cases among the result set where "grep" is just the first command and some "awk/sed/tr" (you name it) sits between it and the final "wc -l". But quite a few of the "| wc -l" invocations could be replaced by a "-c" added to the "grep" invocation.

Vincent Fourmond: Ghost in the machine: faint remanence of screen contents across reboots in a Macbook pro retina

12 January, 2016 - 04:31
As was noted a few times before, I happen to own a Macbook Pro Retina laptop which I mostly use under Linux. I had noticed from time to time weird mixes between two screens, i.e. I would be looking at a website but, in some areas with uniform colors, I would see faint traces of other windows currently open on another screen. These faint traces would not show up in a screenshot. It never really bothered me, and I attributed it to a weird specificity of the Mac hardware (they often do that) that was not well handled by the nouveau driver, so I had simply dismissed it. Until, one day, I switched off the computer, switched it back on, booted to MacOS and saw this as a boot screen:
Here is a close-up view of the top-left corner of the screen:
If you look carefully, you can still see the contents of the page I was browsing just before switching off the computer! So this problem is not Linux-specific, it also affects MacOS... To be honest, I don't have a clue about what's happening here, but it has to be a serious hardware malfunction. How can two video memory regions be composed upon display without the computer explicitly asking for it? Why does that problem survive a reboot? I mean, someone switches on my computer and can see the last thing I did on it? I could actually read the address line without difficulty, although you'll have to take my word for it, since the picture does not show it that well. That's scary...

Thadeu Lima de Souza Cascardo: GNU on Smartphones (part II)

11 January, 2016 - 21:58

Some time ago, I wrote how to get GNU on a smartphone. This is going to be a long discussion on why and how we should work on more operating systems for more devices.

On Android

So, why should we bother if we already have Android, some might ask? If it's just because of some binary blobs, one could just use Replicant, right?

Well, one of the problems is that Android development is done in hiding and pushed downstream when a new version is launched. There is no community behind it that anyone can join. Replicant ends up either following it or staying behind. It could fork and have its own innovations, and I am all for it. But the lack of manpower for supporting devices and keeping up with the latest versions and security fixes already takes most of the time of the one or two developers involved.

Also, Android has a huge modularity problem, which I will discuss further below. It's hard to replace many components in the system unless you replace them all. And that also causes the problem that applications can hardly share extra components.

I would rather see Debian running on my devices and supporting good environments and frameworks for all kinds of devices, like phones, tablets, TVs, cars, etc. It's developed by a community I can join, it allows a diverse set of frameworks and environments, and it's much easier to replace single components on such a system.

On Copyleft Hardware

I get it. Hardware is not free or libre. Its design is. I love the concept of copyleft hardware, where one applies the copyleft principles to a hardware design. Also, there is the concept of Hardware that Respects Your Freedom, that is, one that allows you to use it with only free software.

Note that RYF hardware is not necessarily copyleft hardware. In fact, most of the time, the original vendor has not helped at all, and it required reverse engineering efforts by other people to be able to run free software on those systems.

My point here is that we should not prevent people from running free software on hardware that is not RYF or not copyleft. We should continue the efforts of reverse engineering and of pushing hardware vendors to not only ship free software for their hardware, but also release their designs under free licenses. But in the meantime, we need to make free software flourish on the plethora of devices in the hands of so many people around the world.

On Diversity

I won't go into details in this post about two things. One topic I love is retrocomputing: how Linux supported so many devices, and how many free or quasi-free operating systems ran on so many of them in the past. I could mention uClinux, Familiar, PDAs, palmtops, Motorolas, etc. But I will leave that for another time and go from Openmoko and Maemo forward.

The other topic is application scalability. Even Debian, with so much software available, does not ship all the free software there is. And it doesn't support all third-party services out there. How can we solve that? It has to do with platforms, protocols, open protocols, etc. I will not go into that today.

Because I believe that either way, it's healthy for our society to have diversity. I believe we should have other operating systems available for our devices. Even if application developers will not develop for all of them. That is already the case today. There are other ways to fix that, when that needs fixing. Sometimes, it's sufficient that you can have your own operating system on your device, that you can customize it, enhance it and share it with friends.

And also, that would allow for innovation. It would make it possible for some other operating system to gain enough market share in some niche, other than Android and iOS, for example. But that requires that we can support that operating system on different devices.

And that is the scalability that I want to talk about. How to support more devices with less effort.

Options

But before I go into that, let me write more about the alternatives we have out there. And some of the history around it.

Openmoko

Openmoko developed a set of devices that had some free design, and they have some free operating systems running on top. The community developed a lot of its own as well. Debian could (and still can) run on them. There is SHR, which uses FSO, a mobile middleware based on D-Bus.

It even spawned other devices, like the GTA04, a successor board that can be shipped inside a Neo Freerunner case.

Maemo, Moblin, Meego and Tizen

I remember the announcement of the Nokia N770. During FISL in 2005, I even criticized it a lot, because it shipped with proprietary drivers. And applications. But it was the first time we heard of Maemo. It was based on Debian and GNOME. The GNOME Mobile initiative was born from that, I believe, but died later on.

But with the launch of the N900, and later events, like the merge of Moblin with Maemo to create Meego, we all had an operating system that was based on community-developed components, that had some community participation, and that was much more like the systems we were used to. You could run gcc on the N900. You could install Das U-Boot and have other operating systems running there.

But Nokia went a different path. Intel started Tizen with Samsung. There is so much legacy there that could still be developed upon. I am just sad that Jolla decided to put proprietary layers on top of that to create SailfishOS.

But we still have Mer and Nemo. It looks like Cordia is gone, though; at least http://cordiahd.org seems to have been registered by a domain seller.

Not to forget, Neo900, a project to upgrade the board, in the same vein as GTA04.

FirefoxOS and Ubuntu

What can I say about FirefoxOS and Ubuntu Phone? In summary, I think we need more than the Web, and Canonical has a history of not being as community-oriented as we'd like.

I won't go too deeply here into what I think about the Web as a platform. But I think we need a system with more platform options. I haven't participated in projects directed by Mozilla either, so I can't say much about that.

Ubuntu Phone should be a system more like what we are used to. But Canonical is going to set the direction of the project; it's not a community project.

Nonetheless, I think they add to the diversity, and users should be able to try them, or their free parts or versions. But there are challenges there, that I think need to be discussed.

So, both of these systems are based on Android. They don't use Dalvik or the application framework, but they use what is below that, which means device drivers in userspace, like RIL for the modem drivers and EGL for the graphics drivers. The reason they do that is to build on top of all the existing vendor support for Android: if a SoC and phone vendor already supports Android, there should not be much left to do to support FirefoxOS or Ubuntu Phone.

In practice, this has not benefited the users or the projects. Porting should be as simple as taking a generic FirefoxOS or Ubuntu Phone image and running it on top of your Android or CyanogenMod port.

Instead, porting usually requires doing all the same work as porting Android. Even though one should be able to take advantage of existing ports, it still requires doing a lot of integration work and building an entire image, most of the time including the upper layers, which should be identical on all devices. This process should require at most creating a new image based on other images and loading it on the device. I will discuss this more below.

I can't forget to mention the Matchstick TV project. It is based on FirefoxOS. I think it would have much better chances to succeed if it were easier for testers to get images for all of their gadgets capable of running some form of Android.

Replicant

And then we have Replicant. It has a lot of the problems Android has. Even so, it's a tremendously important project. First, it offers a free version of Android, removing all proprietary pieces. This is very good for the community. But more than that, it tries to free the devices beyond that.

What the project has done is reverse engineer some of the pieces, mostly the modem drivers. That allows other projects to build upon that work and support those modems. Without it, there is no telephony or cellular data available on any of these devices. Not without proprietary software, that is.

They also have been working on free bootloaders, which is another important step for a completely free system.

Next steps

There are many challenges here, but I believe we should work on a set of steps that makes the work more palatable and produces intermediate results that can be used by users and other projects.

One of the goals should be good modularity. The ability to replace some pieces with the least trouble necessary. The update of a single framework library should be just a package install, instead of building the entire framework again. If there is ABI breakage, users should still be able to have access to binaries (and accompanying source code) and only need to update the library and its reverse dependencies. If one layer does not have these properties, at least it should be possible to update this big chunk without interfering with the other layers.

One example is FirefoxOS and Ubuntu Phone. Even if there is a big image with all system userspace and middleware, the porting, install and upgrade process should allow the user to keep the applications and to leverage porting already done by similar projects. Heck, these two projects should be able to share porting efforts without any trouble.

So what follows is a quick discussion on Android builds, then suggestions on how to approach some of the challenges.

On Android build

The big problem with the Android build is its non-modularity. You need to build the world and more to get a single image in the end, instead of packages that could be built independently, generating modular package images that could be bundled into a single system image. Better yet would be the ability to pick only those components that matter.

Certainly, portability would be just as interesting: being able to build certain components on top of a GNU system with GNU libc.

At times, it seems like this is done by design, to hinder "fragmentation" and competition. Basically, making it difficult to exercise the freedom to modify the software and share your modifications with others.

Imagine the possibilities of being able to:

  • Build Dalvik and the framework on a GNU system in order to run Android programs on GNU systems.
  • Or build a single framework library that would be useful for your Java project.
  • Build only cutils, because you want to use that on your project.
  • Build the HAL and be able to use Android drivers on your system.
  • Build only RIL and use the modem drivers with ofono.

There are some dangers in promoting the use of Android like this. Since it promotes proprietary drivers, relying on such layers instead of better community-oriented layers means giving an advantage to the former. So the best plan would be to replace those layers, where they are used, with things like ofono and Wayland, for example.

But, in the meantime, making it easier for users to experiment with FirefoxOS, Ubuntu Phone, Matchstick TV, Replicant on other devices, without resorting to building the entire Android system, would be a very nice thing.

It is possible that there are some challenges with respect to API and ABI: these layers may be so intermingled that changes present in one port would prevent FirefoxOS from running on top of it without changes in either of the layers. I can't confirm that is the case, but I can't deny the possibility.

Rooting

One of the challenges we have that may have trouble in scaling is rooting devices.

Unfortunately, most of the devices are owned by the vendor or carrier, not the user. The user is prevented from replacing software components in the system by being denied permission to replace most of the files in the filesystem, to write to block devices, to change boot parameters, or to write to the storage devices.

Fortunately, there are many documents, guides, howtos, and whatnots out there instructing on how to root a lot of devices. In some cases, it depends on software bugs that can be patched by the users as instructed by vendors.

Certainly, favoring devices that are built to allow rooting, hacking, etc, is a good thing. But we should still help those users out there who do not have such devices, and allow them to run free software on them.

Booting

Then, comes booting, which is a large discussion on its own, and also related to rooting, or how to allow users to replace their software.

First, we have the topic of bootloaders, which are usually not free, and embedded in the chip, not on external storage. So, there are those pieces of the bootloader which we can replace more easily, and those that we cannot, because they would require changing a piece of ROM, for example.

Das U-Boot would be one of the preferred options for a system. It supports a lot of hardware platforms out there, a lot of storage devices and filesystems, network booting, USB and serial booting, and a lot of other things. Porting is not an easy task, but it is possible.

Meanwhile, using the bootloaders that ship with the system, when possible, would allow a path where other pieces of the system would be free.

One of the challenges here is building single images that could boot everywhere. The lack of a single ARM boot environment is a curse and a blessing. It makes it hard to have this single boot image, but, on the other hand, it has allowed so much of the information for such systems to be free, to be embedded in copyleft software, instead of having blobs executed by our free systems, like ACPI encourages.

Device trees have pushed this portability issue from the kernel to the bootloaders, possibly encouraging vendors to now hide and copyright this information in the form of proprietary device trees. But it has made single images easier.

In practice, we still need to see a lot of devices out there supporting this single-kernel scenario. And this mechanism only brings us the possibility of a single kernel; we still need to ship bootable images that contain pieces of bootloaders that are not as portable.

This has caused lots of operating systems out there to be built for a single board, or to support just a small set of boards. I struggle with the concept. Why are we not able to mix device-specific pieces with generic pieces and get a resulting image that will boot on our board of choice? Why does every project need to do all the porting work again, repeating the effort? Or ship one entire ISO image for every supported board? Check how Geexbox does it, for an example.

Fortunately, I see Debian going in a good direction. Here, one can see how it instructs the user to combine a system-dependent part with a system-independent part.

Installation

Which brings us to the installation challenge. We should make this easy and also customizable by the user. Every project might have its own method, and that is part of the diversity that we should allow.

The great challenge here is handling rooting and booting for the user. But it's also possible to leave that as separate efforts, as it would be nice to have good installers as we have in the desktop and server environments.

Linux

Linux is one of the most ported kernels out there. Of course, it would be healthy for the diversity I propose that other kernels and systems should also work on those mobile devices. NetBSD, for example, has a reputation of being very portable and has been ported to many platforms. And GNU runs on many of them, either together with the original systems, or as the main system on top of other kernels, as proven by the Debian GNU/kFreeBSD port, which uses GNU libc.

But, though I would love to see HURD running directly on those devices, Linux and its derivatives are already running on them. Easier to tackle is to get Linux-libre running on those systems.

But though this looks like one of the easier tasks at hand, it is not. If you run a derivative version of Linux provided by the vendor, things should go smoothly. But most of the time, the vendor does not provide patches upstream and leaves their fork to rot, if you are lucky, in the early 3.x series.

And sometimes there is not even source code available. There are vendors out there who do not respect the GPL, and that is one of the reasons why GPL enforcement is important. I take this opportunity to call attention to the work of the Software Freedom Conservancy and its current efforts to raise funds to continue that work.

Running an existing binary version of Linux on your device with a free userspace is part of the strategy of replacing pieces one at a time while allowing for good diversity and reaching more users.

Drivers

Then we have the matter of drivers. And in this case, it's not only Linux drivers, but drivers running in userspace. Though others exist, there are two important and common cases, modem and graphics drivers, both of which are usually provided by Android frameworks, and which other systems try to leverage instead of using alternatives that are more community-friendly.

In the case of modem drivers, Android uses its RIL. Besides not using a radio or modem at all, there are two strategies for free versions. One is using free versions of the drivers for RIL. That's what Replicant does because, well, it uses the other Android subsystems after all. The other one is using a system that is developed by the community. ofono is one that, even though it's an Intel initiative, looks more community-governed, or at least more open to community participation, than any Android subsystem ever was.

As for graphics, Canonical even built its Mir project with the goal of being able to use Android GPU drivers. Luckily, there have been a lot of reverse engineering efforts out there for many of the GPUs shipped with ARM SoCs.

Then we can use Mesa, Wayland and X.org. Another option, until then, is just using a framebuffer driver, possibly one that does not need any initialization and relies on the one done by the bootloader, for example.

Middleware and Shells

Plasma Active, which I just found out is now Plasma Mobile, looks like a great system. Built by the folks behind KDE, we can expect great freedom to come from it. Unfortunately, it suffers from what we have been discussing here: lack of good device support, or shipping images that run only on single devices, without leveraging the porting done before by other systems. Fortunately, that is just what I want to tackle with this effort.

FSO, which I mentioned before, is a good middleware that we should try to run on top of these devices. Running the GNOME or Unity shell, and using rpm- or deb-based systems: that's all part of the plan for diversity. You could use systemd or a System V init, whatever gives you the kicks.

It's not an easy task, since there are so many new things on these new kinds of devices. Besides touch, there is telephony, which, as I mentioned, would be a good task for ofono. As for TV sets or dongles, I would love to see OpenFlint, created by the Matchstick TV guys, flourish out there and allow me to flint stuff from my mobile phone running Debian onto my TV dongle running Debian.

Project

So, is there a project out there you can start contributing to? Well, I pointed out a lot of them; all of them are part of this plan. Contribute to reverse engineering GPUs, or to Plasma Mobile, ofono, GNOME, Linux-libre, bootloaders, Debian, and so many other projects.

My plans are to contribute in the scope of Debian and make sure it works well on top of semi-liberated devices, and make sure there is a nice user interface and good applications when using either GNOME or Plasma. Integrating ofono is a good next step, but I am running ahead of myself.

Right now, I don't think there is a need for an integrated effort. If you think otherwise, please let me know. If you are doing something in this direction, I would love to know.

Paul Wise reminded me to point to Debian Mobile, where I will probably contribute any documentation and other relevant results.

Thanks

There are so many people to thank, but I would like to remember Ian Murdock, who created Debian, one of the projects that inspires me the most. I think the best way to handle his passing is to celebrate what he helped create, and move forward, taking more free software to more people in an easy and modular way.

I have been wanting to write something like this for a long time, so I thank Chris Webber for inspiring me on doing it.

Arturo Borrero González: Great Debian meeting!

11 January, 2016 - 17:19


Last week we finally managed to hold a proper informal Debian meeting in Seville.

A total of 9 people attended, 3 of them DDs (Aurelien Jarno, Guillem Jover, Ana Guerrero) and 1 DM (me).

The meeting started with the usual round of personal introductions, and then topics ranged from how to get more people involved with Debian, to discussions of GSoC-like programs, and some Debian anecdotes as well.

There was also talk about when and how future meetings should take place.

This meeting was hosted by http://www.plan4d.eu/, thanks to Pablo Neira (Netfilter project head).

Some pics of the moment:



Alessio Treglia: Filling old bottles with new wine

11 January, 2016 - 15:29

 

“They are filling old bottles with new wine!” This is what the physicist Werner Heisenberg heard his friend and colleague Wolfgang Pauli exclaim when, criticizing the approach of the scientists of the time, Pauli argued that the notion of the “quantum” had been forcibly glued onto the old planetary-model theory of Bohr's atom. Faced with the huge questions introduced by quantum physics, Pauli instead began to observe the new findings from a different point of view, from a new level of reality, without the constraints imposed by previous theories.

Newton himself, once he had theorized the law of the gravitational field and failed to place it within any of the physical realities of his time, merely…

<Read More...>

Daniel Pocock: FOSDEM RTC Dev-room schedule published

11 January, 2016 - 13:09

If you want to help make Free Real-time Communication (RTC) with free, open source software surpass proprietary solutions this year, a great place to start is the FOSDEM RTC dev-room.

On Friday we published the list of 17 talks accepted in the dev-room (times are still provisional until the FOSDEM schedule is printed). They include a range of topics, including SIP, XMPP, WebRTC and peer-to-peer Real-time communication.

RTC will be very prominent at FOSDEM this year with several talks on this topic, including my own, in the main track.

Keith Packard: AltOS 1.6.2

11 January, 2016 - 12:03
AltOS 1.6.2 — TeleMega v2.0 support, bug fixes and documentation updates

Bdale and I are pleased to announce the release of AltOS version 1.6.2.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STMF042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including support for our new TeleMega v2.0 board, a small selection of bug fixes, and a major update of the documentation.

AltOS Firmware — TeleMega v2.0 added

The updated six-channel flight computer, TeleMega v2.0, has a few changes from the v1.0 design:

  • CC1200 radio chip instead of the CC1120. Better receive performance for packet mode, same transmit performance.

  • Serial external connector replaced with four PWM channels for external servos.

  • Companion pins rewired to match EasyMega functionality.

None of these change the basic functionality of the device, but they do change the firmware a bit so there's a new package.

AltOS Bug Fixes

We also worked around a ground station limitation in the firmware:

  • Slow down telemetry packets so receivers can keep up. With TeleMega v2 offering a fast CPU and faster radio chip, it was overrunning our receivers, so a small gap was introduced between packets.

AltosUI and TeleGPS applications

A few minor fixes and improvements are in this release:

  • Post-flight TeleMega and EasyMega orientation computations were off by a factor of two

  • Downloading eeprom data from flight hardware would bail if there was an error in a data record. Now it keeps going.

Documentation

I spent a good number of hours completely reformatting and restructuring the Altus Metrum documentation.

  • I've changed the source format from raw docbook to asciidoc, which has made it much easier to edit and to use docbook features like links.

  • The css moves the table of contents out to a sidebar so you can navigate the html format easily.

  • There's a separate EasyMini manual now, constructed by taking sections from the larger manual.

Ben Hutchings: Debian LTS work, December 2015

11 January, 2016 - 08:36

In December I carried over 15 hours from October/November and was assigned another 15 hours of work by Freexian's Debian LTS initiative. I worked a total of 20 hours despite the holidays.

I uploaded a security and bug fix update to linux-2.6 early in December, and sent DLA-360-1. I also backported several more security fixes, released in the new year. I sent several of the fixes to Willy Tarreau for inclusion in Linux 2.6.32-longterm.

I prepared an update to sudo to fix CVE-2015-5602. This turned out not to have been properly fixed upstream, so I finished the job and am now in the process of backporting and uploading fixes for all suites.

I reviewed the packages affected by CVE-2015-8614 and the upstream fix in claws-mail, and found that that was also incomplete. This resulted in another CVE ID being issued.

I had another week in the front desk role, over the new year, and triaged about 20 new issues. About half of them affected packages supported in squeeze-lts.

Updated: I also found a bug in the contact-maintainers script used by the LTS front desk. It used apt-cache show to find out the maintainers of a source package, which may result in outdated information — particularly if you configure APT to fetch squeeze sources in order to work on LTS! I modified the script to grab maintainer information from the RDF description provided by packages.qa.debian.org (not yet implemented on tracker.debian.org). I feel there ought to be an easier way to do this, but at least I learned something about RDF.

Dirk Eddelbuettel: Rcpp 0.12.3: Keep rollin'

11 January, 2016 - 07:08

The third update in the 0.12.* series of Rcpp arrived on the CRAN network for GNU R earlier today, and has been pushed to Debian. It follows the 0.12.0 release from late July, the 0.12.1 release in September, and the 0.12.2 release in November, making it the seventh release at the steady bi-monthly release frequency. This is somewhat more of a maintenance release, addressing a number of small bugs and nuisances without adding any new features.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 553 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by more than forty packages since the last release in November.

Once again, we have new first-time contributors. Kazuki Fukui corrected an issue he encountered when CLion re-formatted some code for him. Joshua Pritikin corrected a constructor initialization. Of course, we also had several pull requests from regular contributors -- see below for a detailed list of changes extracted from the NEWS file.

Changes in Rcpp version 0.12.3 (2016-01-10)
  • Changes in Rcpp API:

    • Const iterators for CharacterVector now behave like regular iterators (PR #404 by Dan fixing #362).

    • Math operators between matrix and scalar types have been added (PR #406 by Qiang fixing #365).

    • A missing std::hash function interface for Rcpp::String has been added (PR #408 by Qiang fixing #84).

  • Changes in Rcpp Attributes:

    • Avoid invalid function names when generating C++ interfaces (PR #403 by JJ fixing #402).

    • Insert additional space around & in function interface (PR #400 by Kazuki Fukui fixing #278).

  • Changes in Rcpp Modules:

    • The copy constructor now initializes the base class (PR #411 by Joshua Pritikin fixing #410)

  • Changes in Rcpp Repository:

    • Added a file Contributing.md providing some points to potential contributors (PR #414 closing issue #413)

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Carl Chenet: Feed2tweet 0.2: the power of the command line for sending your RSS feed to Twitter

11 January, 2016 - 06:00

Feed2tweet is a self-hosted Python app to send your RSS feed to Twitter. A long description of why and how to use it is available in my last post about it.

Feed2tweet is in production for Le Journal du hacker, a French Hacker News-style FOSS website.

Feed2tweet 0.2 brings a lot of new command line options, contributed by Antoine Beaupré (@theanarcat). Here is a short extract of the Feed2tweet 0.2 changelog:

 

  • new command line option -r or --dry-run to simulate the execution of Feed2tweet
  • new command line option -d or --debug to increase the verbosity of the execution of Feed2tweet
  • new command line option -v or --verbose to follow the execution of Feed2tweet
  • new command line option --cachefile to give the path of the cache file
  • new command line option --hashtaglist to give the path of the list of hashtags composed of multiple words
  • new command line option -r or --rss to give the URI of the RSS feed

 

Lots of issues from the previous release were also fixed.

Using Feed2tweet? Send us bug reports, feature requests, pull requests, or comments about it!


Vasudev Kamath: Managing Virtual Network Devices with systemd-networkd

10 January, 2016 - 23:56

I've been using bridge networking and tap networking for containers and virtual machines on my system. The bridge network which I use to connect containers was configured using the /etc/network/interfaces file as shown below.

auto natbr0
iface natbr0 inet static
   address 172.16.10.1
   netmask 255.255.255.0
   pre-up brctl addbr natbr0
   post-down brctl delbr natbr0
   post-down sysctl net.ipv4.ip_forward=0
   post-down sysctl net.ipv6.conf.all.forwarding=0
   post-up sysctl net.ipv4.ip_forward=1
   post-up sysctl net.ipv6.conf.all.forwarding=1
   post-up iptables -A POSTROUTING -t mangle -p udp --dport bootpc -s 172.16.0.0/16 -j CHECKSUM --checksum-fill
   pre-down iptables -D POSTROUTING -t mangle -p udp --dport bootpc -s 172.16.0.0/16 -j CHECKSUM --checksum-fill

Basically I set up masquerading and IP forwarding when the network comes up, so all my containers and virtual machines can access the internet.

This can be done with systemd-networkd in just a couple of lines, yes, a couple of lines. For this to work, you first need to enable systemd-networkd.

systemctl enable systemd-networkd.service

Now I need to write 2 configuration files for the above bridge interface under /etc/systemd/network. One file is natbr0.netdev, which configures the bridge, and the other is natbr0.network, which configures the IP address and other settings for the bridge interface.

#natbr0.netdev
[NetDev]
Description=Bridge interface for containers/vms
Name=natbr0
Kind=bridge
#natbr0.network
[Match]
Name=natbr0

[Network]
Description=IP configuration for natbr0
Address=172.16.10.1/16
IPForward=yes
IPMasquerade=yes

The IPForward in the above configuration is actually redundant: when I set IPMasquerade, it automatically enables IPForward. So this configuration is the equivalent of what I did in my interfaces file, and it also spares me the additional iptables invocations to add masquerading rules. This pretty much simplifies the handling of virtual network devices.

There are many other things you can do with systemd-networkd, like running a DHCP server on the interface (see the sketch below). I suggest reading the manual pages systemd.network(5) and systemd.netdev(5).
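
For example, serving DHCP on the bridge might look like the following sketch (untested here; systemd.network(5) is the authoritative reference for these options):

#natbr0.network, extended with a DHCP server (sketch)
[Match]
Name=natbr0

[Network]
Address=172.16.10.1/16
IPMasquerade=yes
DHCPServer=yes

[DHCPServer]
# hand out leases from a sub-range of the bridge network
PoolOffset=100
PoolSize=50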

systemd-networkd allows you to configure all types of virtual networking devices as well as actual network interfaces. I've not used it to handle actual network interfaces myself yet.

Andrew Shadura: Public transport map of Managua

10 January, 2016 - 23:26

Holger Levsen writes about the public transport map of Managua, Nicaragua, which is, according to him, the first detailed map of Managua’s bus network:

If you haven’t been to Managua, you might not be able to immediately appreciate the usefulness of this. Up until now, there has been no map nor timetable for the bus system, which as you can see now easily and from far away, is actually quite big and is used by 80% of the population in a city, where the streets still have no names.

Having had a look at the map they produced, I have to admit I quite liked it:

MapaNica, the community behind said map, is raising funds to make the lives of locals easier by publishing a printed version of the map and distributing it. They have already raised more than $3300 of their $7500 goal. Every further donation will help them print more maps.

Please go to support.mapanica.net and support their initiative!

Holger Levsen: 20160110-support-mapanica-net

10 January, 2016 - 20:17
Please support the public transportation map of Managua

Some of you might remember DebConf12 in Managua, Nicaragua, and the very friendly and helpful locals, who recently contacted me to tell me about their new project, so that I can share it on Planet Debian: a local community of OpenStreetMap enthusiasts, some of whom were involved in DebConf12, has collected for the first time detailed information about Managua's bus network!

To take their efforts further, they will now print these maps on paper, so that even more people can use them in their daily lives.

If you haven't been to Managua, you might not be able to immediately appreciate the usefulness of this. Up until now, there has been no map nor timetable for the bus system, which as you can see now easily and from far away, is actually quite big and is used by 80% of the population in a city, where the streets still have no names.

If this made you curious (or just brought back happy memories from 2012), please go to http://support.mapanica.net and donate some money - their campaign is running for 3 more weeks, and they have currently raised 3300 USD, enough to print some maps, but 4200 USD short of their goal. Every further donation will help to print some more maps; even something as little as 20 USD or EUR will help people in their real lives to better understand the beast that is Managua's bus route network.

Juliana Louback: NLP: Viterbi Named-Entity Tagger

10 January, 2016 - 17:06

During my MSc program, I was lucky to squeeze into Michael Collin’s NLP class. We used his Coursera course as part of the program, which I’d highly recommend.

Recently I decided to review my NLP studies and I believe the best way to learn or relearn a subject is to teach it. This is one in a series of 4 posts with a walk-through of the algorithms we implemented during the course. I’ll provide links to my code hosted on Github.

Disclaimer: Before taking this NLP course, the only thing I knew about Python was that ‘it’s the one without curly brackets’. I learned Python on the go while implementing these algorithms. So if I did anything against Python code conventions or flat-out heinous, I apologize and thank you in advance for your understanding. Feel free to write and let me know.

The Concept

To quote Wikipedia, “Named-entity recognition (I've always known it as tagging) is a subtask of information extraction that seeks to locate and classify elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc.”

For example, the algorithm receives as input some text

“Bill Gates founded Microsoft in 1975.”

and outputs

“Bill Gates[person] founded Microsoft[organization] in 1975[date].”

Off the top of my head, some useful applications are document matching (e.g. a document containing Gates[person] may not be on the same topic as one containing gates[object]) and search queries. I'm sure there are lots more; if you check out Collins's Coursera course he may discuss this in greater depth.

The Requirements

Development data: The file ner_dev.dat provided by prof. Michael Collins has a series of sentences separated by an empty line, one word per line.

Training data: The file ner_train.dat provided by prof. Michael Collins has a series of sentences separated by an empty line, one word and tag per line, separated by a space.

Word-tag count data: The file ner.counts has the format [count] [type of tag] [label] [word]. The tags used are RARE, O, I-MISC, I-PER, I-ORG, I-LOC, B-MISC, B-PER, B-ORG, B-LOC. The tag O means it’s not an NE. This file is generated by count_freqs.py, a script provided by prof. Michael Collins. Run count_freqs.py on the training data ner_train.dat

The Algorithm

Python code: viterbi.py

Usage: python viterbi.py ner.counts ngram.counts [input_file] > [output_file]

Summary: The Viterbi algorithm finds the maximum-probability path for a series of observations, based on emission and transition probabilities. In a Markov process, emission is the probability of an output given a state, and transition is the probability of transitioning to a state given the previous states. In our case, the emission parameter e(x|y) = the probability of the word being x given that you attributed tag y. If your training data had 100 counts of ‘person’ tags, one of which is the word ‘London’ (I know a guy who named his kid London), e(‘London’|’person’) = 0.01. Now with 50 counts of ‘location’ tags, 5 of which are ‘London’, e(‘London’|’location’) = 0.1, which clearly trumps 0.01. The transition parameter q(yi | yi-1, yi-2) = the probability of putting tag y in position i given its two previous tags. This is calculated as Count(trigram)/Count(bigram). For each word in the development data, the Viterbi algorithm will compute a score for each word-tag combo based on the emission and transition parameters it obtained from the training data. It does this for every possible tag and sees which is most likely. Clearly this won't be 100% correct, as natural language is unpredictable, but you should get pretty high accuracy.
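
In code, both parameters reduce to ratios of counts. Here is a minimal sketch; the dictionary names match those built in Steps 1 and 2 below, with n-gram keys shown as tuples of labels:

# e(x|y): fraction of tag y's occurrences that emit word x
def emission(word, label, count_xy, count_y):
    return count_xy[word][label] / float(count_y[label])

# q(yi | yi-2, yi-1) = Count(trigram) / Count(bigram)
def transition(y2, y1, yi, trigram_counts, bigram_counts):
    return trigram_counts[(y2, y1, yi)] / float(bigram_counts[(y2, y1)])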

Optional Preprocessing

Re-label words in the training data with frequency < 5 as 'RARE'. This isn't required, but it is useful. Re-run count_freqs.py if used.

Python code: label_rare.py

Usage: python label_rare.py [input_file]

Pseudocode:

  1. Uses a Python Counter to obtain word counts in [input_file]; stores the word-count pairs with count < 5 in a dictionary named rare_words.
  2. Iterates through each line in [input_file]; if the word is in the rare_words dictionary, replaces the word with RARE (see the sketch below).
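
A compressed sketch of how such a script might look (file names are hard-coded here for brevity; this is not the exact label_rare.py):

from collections import Counter

# pass 1: count word frequencies (the word is the first token of each line)
counts = Counter()
with open("ner_train.dat") as f:
    for line in f:
        parts = line.split()
        if parts:
            counts[parts[0]] += 1

# pass 2: rewrite the file, replacing infrequent words with RARE
with open("ner_train.dat") as f, open("ner_train_rare.dat", "w") as out:
    for line in f:
        parts = line.split()
        if parts and counts[parts[0]] < 5:
            parts[0] = "RARE"
            out.write(" ".join(parts) + "\n")
        else:
            out.write(line)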

Step 1. Get Count(y) and Count(x~y)

Python code: emission_counts.py

Pseudocode:

  1. Iterate through each line in the ner.counts file and store each word-label-count combo in a dictionary count_xy, updating the dictionary count_y as you go. For example, count_xy[Peter][I-PER] returns the number of times the word 'Peter' was labeled 'I-PER' in the training data, and count_y[I-PER] the total number of 'I-PER' tags. The dictionary count_y contains one item per label (RARE, O, I-MISC, I-PER, I-ORG, I-LOC, B-MISC, B-PER, B-ORG, B-LOC); see the sketch below.
  2. Return count_xy, count_y.
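
A sketch of that reading loop, assuming every line of ner.counts has exactly the four fields described above (the second field, the tag type, is ignored here):

from collections import defaultdict

def read_counts(path):
    count_xy = defaultdict(dict)
    count_y = defaultdict(int)
    with open(path) as f:
        for line in f:
            count, tag_type, label, word = line.split()
            count_xy[word][label] = count_xy[word].get(label, 0) + int(count)
            count_y[label] += int(count)
    return count_xy, count_y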

Step 2. Get bigram and trigram counts

Python code: transition_counts.py

Pseudocode:

  1. Iterate through each line in the n-gram counts file.
  2. If the line contains '2-GRAM', add an item to the bigram_counts dictionary using the bigram (the two space-separated labels following the tag type '2-GRAM') as key and the count as value. This dictionary will contain Count(yi-2, yi-1).
  3. If the line contains '3-GRAM', add an item to the trigram_counts dictionary using the trigram as key and the count as value. This dictionary will contain Count(yi-2, yi-1, yi).
  4. Return the dictionaries of bigram and trigram counts (see the sketch below).
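
A sketch of the same idea, assuming count_freqs.py writes n-gram lines in the form [count] [n]-GRAM [label ...]:

from collections import defaultdict

def read_ngram_counts(path):
    bigram_counts = defaultdict(int)
    trigram_counts = defaultdict(int)
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) > 1 and parts[1] == "2-GRAM":
                bigram_counts[tuple(parts[2:])] += int(parts[0])
            elif len(parts) > 1 and parts[1] == "3-GRAM":
                trigram_counts[tuple(parts[2:])] += int(parts[0])
    return bigram_counts, trigram_counts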

Step 3. Viterbi

(For each line in the [input_file]; a compressed sketch of this loop follows the list):

  1. If the word was seen in the training data (present in the count_xy dictionary), for each of the possible labels for the word:
  2. Calculate emission = count_xy[word][label] / float(count_y[label])
  3. Calculate transition = trigram_counts[trigram] / float(bigram_counts[bigram]). Note: yi-2 = *, yi-1 = * for the first round
  4. Set probability = emission x transition
  5. Update max(probability) and arg max if needed.
  6. If the word was not seen in the training data, for each of the possible labels of RARE:
  7. Calculate emission = count_xy[RARE][label] / float(count_y[label])
  8. Calculate transition = trigram_counts[trigram] / float(bigram_counts[bigram]). Note: yi-2 = *, yi-1 = * for the first round
  9. Set probability = emission x transition
  10. Update max(probability) and arg max if needed, with the emission taken from the RARE counts.
  11. Write arg max and log(max(probability)) to the output file.
  12. Update yi-2, yi-1.
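
Putting the two cases together, the per-word scoring might be sketched like this; it is a simplified reading of the steps above, not the full viterbi.py, and the .get() defaults for unseen n-grams are my own simplification:

import math

def best_tag(word, y2, y1, count_xy, count_y, bigram_counts, trigram_counts):
    # unseen words fall back to the RARE emission counts (step 6)
    key = word if word in count_xy else "RARE"
    best_label, best_prob = None, 0.0
    for label in count_xy[key]:
        emission = count_xy[key][label] / float(count_y[label])
        transition = (trigram_counts.get((y2, y1, label), 0)
                      / float(bigram_counts.get((y2, y1), 1)))
        probability = emission * transition
        if probability > best_prob:
            best_label, best_prob = label, probability
    return best_label, math.log(best_prob) if best_prob else float("-inf")

# the tag history starts at ('*', '*') and shifts after every word:
# y2, y1 = y1, best_label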

Evaluation

Prof. Michael Collins provided an evaluation script eval_ne_tagger.py to verify the output of your Viterbi implementation. Usage: python eval_ne_tagger.py ner_dev.key [output_file]

Alessio Treglia: A WordPress Plugin to list posts in complex nested websites

10 January, 2016 - 16:06

 

"List all posts by Authors, nested Categories and Titles" is a WordPress plugin I wrote to fix a menu issue I had during a complex website development. It has been included in the official WordPress Plugin repository. The plugin is particularly suitable for multi-nested-category and multi-author websites handling a large number of posts and a complex nested category layout (e.g. academic papers, newspaper articles, etc.). It allows the user to place a shortcode into any page and get rid of a long and nested menu/submenu to show all of the site's posts. A selector in the page will allow the reader to choose grouping by Category/Author/Title. You can also install a "tab" plugin (e.g. Tabby Responsive Tabs) and arrange each group on its specific tab.

Output grouped by Category will look like:

CAT1
    post1                       AUTHOR
    SUBCAT1
        post2                   AUTHOR
        post3                   AUTHOR
        SUBCAT2
            post4               AUTHOR
            ...
            ...

while in the “Author” grouping mode, it is:

AUTHOR1
  post1               [CATEGORY]
  post2               [CATEGORY]

AUTHOR2
  post1               [CATEGORY]
  post2               [CATEGORY]
.....

The plugin installs a new menu, "ACT List Shortcodes", in Admin->Tools. The tool is a helper to automatically generate the required shortcode: it parses the options and displays the string to be copied and pasted into any page.

The plugin is licensed under the GPL2 and can be downloaded from its page on WP Plugins.

 

Scott Kitterman: Debian LTS Work December 2015

10 January, 2016 - 10:34

This was my eighth month as a Freexian-sponsored LTS contributor. I was assigned 8 hours for the month of December.  It's also the month in which I (re)learned an important lesson.

I decided to take another run at backporting the security fixes for Quassel.  Unlike the first time, I was successful at getting the fixes backported.  Then I ran into another problem: the changes took advantage of new features in C++11 such as std::function.

I made an attempt to move the code away from C++11 with my limited C++ foo and, after running head first into a brick wall several times, finally consulted the upstream author of the original fixes.  He let me know that while the problematic code is in fact present in the Quassel versions in squeeze and wheezy, it's not actually possible to trigger the security issue there, so the CVEs should not actually apply to those versions.

That's my report of a singularly unproductive and unpleasant 8 hours.  Next time I'll ask upstream first if there's any doubt.  I shouldn't assume they only care about current/recent releases.
