Planet Debian

Planet Debian - http://planet.debian.org/

John Goerzen: Computer Without a Case

13 November, 2014 - 06:37

My desk today looks like this:

Yep, that’s a computer. Motherboard to the right, floppy drives and CD drive stacked on top of the power supply, hard drive to the left.

And it’s an OLD computer. (I had forgotten just how loud these old power supplies are; wow.)

The point of this exercise is to read data off the floppies that I have made starting nearly 30 years ago now (wow). Many were made with DOS, some were made on a TRS-80 Color Computer II (aka CoCo 2). There are 5.25″ disks, 3.5″ disks, and all sorts of formats. Most are DOS, but the TRS-80 ones use a different physical format. Some of the data was written by Central Point Backup (from PC Tools), which squeezed more data on the disk by adding an extra sector or something, if my vague memory is working.

Reading these disks requires low-level playing with controller timing, and sometimes the original software to extract the data. It doesn’t necessarily work under Linux, and certainly doesn’t work with USB floppies or under emulation. Hence this system.

It’s a bridge. Old enough to run DOS, new enough to use an IDE drive. I can then hook up the IDE drive to an IDE-to-USB converter and copy the data off it onto my Linux system.

But this was tricky. I started the project a few years ago, but life got in the way. Getting back to it now, with the same motherboard and drive, I just couldn’t get it to boot. I eventually began to suspect some disk geometry settings, and with some detective work from fdisk in Linux plus some research into old BIOS disk size limitations, discovered the problem was a 2GB limit. Through some educated trial and error, I programmed the BIOS with a number of cylinders that worked, set it to LBA mode, and finally my 3-year-old DOS 6.2 installation booted.
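For a sense of where a limit like that comes from, here is a back-of-the-envelope sketch in Python. The head and sector counts are typical assumed values for BIOSes of that era, not necessarily what this particular board used: a 12-bit cylinder field caps the count at 4096, and with common translation values the capacity lands just over 2GB.

    # Rough CHS capacity arithmetic (assumed, era-typical values):
    # a 12-bit cylinder field allows at most 4096 cylinders.
    cylinders, heads, sectors_per_track, sector_size = 4096, 16, 63, 512
    capacity = cylinders * heads * sectors_per_track * sector_size
    print(capacity, "bytes, i.e. about", round(capacity / 10**9, 2), "GB")
    # 2113929216 bytes, i.e. about 2.11 GB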

I had also forgotten how finicky things were back then. Pop a floppy from a Debian install set into the drive, type dir b:, and the system hangs. I guess there was a reason the reset button was prominent on the front of the computer back then…

Jonathan Wiltshire: A chilly week

13 November, 2014 - 06:03

It’s finally become properly autumnal, in the real world and in Debian. One week ago, I announced (on behalf of the whole release team) that Debian 8 “Jessie” had successfully frozen on time.

At 18:00 that evening we had 310 release-critical bugs – that is, the number that we must reduce to 0 before the release is ready. How does that number look now?

Well, there are now 315 bugs affecting Jessie, at various stages of progression. That sounds like it’s going in the wrong direction, but considering that over a hundred new bugs were filed just 8 hours after the freeze announcement, things are actually looking pretty good.

Out of those 315 bugs, 91 have been fixed and the packages affected have already been unblocked by the release team. The fixed packages will migrate to Jessie in the next few days, if they continue to be bug-free.

Thirty-four bugs are apparently fixed in unstable but are not cleared for migration yet. That means that the release team has not spotted the fix, or nobody has told us, or the fixed package is unsuitable for some other reason (like unrelated changes in the same upload). You can help by trying to find out which reason applies, and talking to us about it. Most likely nobody has asked us to unblock it yet.

Speaking of unblocks, we currently have 24 requests that need to be looked at, and a further 20 which are awaiting more information from the maintainer. We have already investigated and resolved 260 requests.

Our response rate is currently pretty good, but it’s unclear whether we can sustain it indefinitely. We all have day jobs, for example. One way you could help is to review the list of unchecked unblocks and gather up missing information, or look at the ones tagged moreinfo and see whether that’s still the case (maybe the maintainer replied, but forgot to remove the tag). If you’re confident, you might even try triaging some of the obvious requests and give some feedback to the maintainer, though the final decision will be made by a release team member.

After all, the quicker this goes, the sooner we can release and thaw unstable again.

Footnote: the methods used to determine RC bug counts last week and this week differ, and therefore so could the margins for error. Surprisingly enough, counting bugs is not an exact science. I’m confident these numbers are close enough for broad comparison, even if they’re out by one or two.


Joey Hess: continuing to be pleasantly surprised

13 November, 2014 - 03:33

Free software has been my career for a long time -- nothing else since 1999 -- and it continues to be a happy surprise each time I find a way to continue that streak.

The latest is that I'm being funded for a couple of years to work part-time on git-annex. The funding comes from the DataLad project, which was recently awarded a grant by the National Science Foundation. DataLad folks (at Dartmouth College and at Magdeburg University in Germany) are working on providing easy access to scientific data (particularly neuroimaging). So git-annex will actually be used for science!

I'm being funded for around 30 hours of work each month, to do general work on the git-annex core (not on the webapp or assistant). That includes bug fixes and some improvements that are wanted for DataLad but are themselves all generally useful. (See the issue list.)

This is enough to get by on, at least in my current living situation. It would be great if I could find some funding for my other work time -- but it's also wonderful to have the flexibility to spend time on whatever other interesting projects I might want to.

Raphaël Hertzog: Freexian’s third report about Debian Long Term Support

12 November, 2014 - 23:56

Like last month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October 2014, we allocated 13.75 work hours to 3 contributors:

  • Thorsten Alteholz
  • Raphaël Hertzog worked only 10 hours. The remaining hours will be done over November.
  • Holger Levsen did nothing (for unexpected personal reasons), he will catch up in November.

Obviously, only the hours actually worked have been paid. Should the backlog grow further, we will seek more paid contributors (to share the workload) and try to make it easier to redistribute work hours once a contributor knows that he or she won’t be able to handle the hours that were allocated.

Evolution of the situation

Compared to last month, we gained two new sponsors (Daevel and FOSSter, thanks to them!) and we have now 45.5 hours of paid LTS work to “spend” each month. That’s great but we are still far from our minimal goal of funding the equivalent of a half-time position.

In terms of security updates waiting to be handled, the situation is a bit worse than last month: while the dla-needed.txt file only lists 33 packages awaiting an update (6 less than last month), the list of open vulnerabilities in Squeeze shows about 60 affected packages in total. This difference has two explanations: CVE triaging for Squeeze has not been done in the last few days, and the POODLE issue(s) with SSLv3 affect a very large number of packages where it’s not always clear what the proper action is.

In any case, it’s never too late to join the growing list of sponsors and help us do a better job; please check with your company managers. If it’s not possible this year, consider including it in the budget for next year.

Thanks to our sponsors

Let me thank our main sponsors:


Simon Josefsson: Dice Random Numbers

12 November, 2014 - 06:36

Generating data with entropy, or random number generation (RNG), is a well-known difficult problem. Many crypto algorithms and protocols assume random data is available. There are many implementations out there, including /dev/random in the BSD and Linux kernels and API calls in crypto libraries such as GnuTLS or OpenSSL. How they work can be understood by reading the source code. The quality of the data depends on the actual hardware and what entropy sources were available — the RNG implementation itself is deterministic; it merely converts data with supposed entropy from a set of data sources and then generates an output stream.

In some situations, like on virtualized environments or on small embedded systems, it is hard to find entropy sources of sufficient quantity. Rarely are there any lower-bound estimates on how much entropy there is in the data you get. You can improve the situation by using a separate hardware RNG, but there is deployment complexity in that, and from a theoretical point of view, the problem of trusting that you get good random data has merely moved from one system to another. (There is more to say about hardware RNGs; I’ll save that for another day.)

For some purposes, the available solutions do not inspire enough confidence in me because of their high complexity. Complexity is often the enemy of security. In crypto discussions I have said, only half-jokingly, that about the only RNG process I would trust is one that I can explain in simple words and implement myself with the help of pen and paper. Normally I use the example of rolling a normal six-sided dice (a D6) several times. I have been thinking about this process in more detail lately, and felt it was time to write it down, regardless of how silly it may seem.

A dice with six sides produces a random number between 1 and 6. It is relatively straightforward to convince yourself intuitively that it is not clearly biased: inspect that it looks symmetric and do some trial rolls. By repeatedly rolling the dice, you can generate as much data as you need, time permitting.

I do not understand enough thermodynamics to know how to estimate the amount of entropy of a physical process, so I need to resort to intuitive arguments. It would be easy to just assume that a dice produces 3 bits of entropy, because its six outcomes fit in three bits (2^3 = 8 ≥ 6); strictly speaking, a fair D6 yields log2(6) ≈ 2.58 bits per roll, so 3 bits is an upper bound, and one I find easy to convince myself of. I suspect that most dice have some form of defect, though, which leads to a very small bias that could be found with a large number of rolls. Thus I would propose that the amount of entropy of most D6’s is slightly below that bound on average. Further, to establish a lower bound, it intuitively seems easy to believe that if the entropy of a particular D6 were closer to 2 bits than to 3 bits, this would be noticeable fairly quickly by trial rolls. That assumes the dice does not have complex logic and machinery in it that would hide the patterns. With the tinfoil hat on, consider a dice with a power source and mechanics in it that allowed it to decide which number it would land on: it could generate a seemingly random pattern that still contained 0 bits of entropy. For example, suppose a D6 is built to produce the pattern 4, 1, 4, 2, 1, 3, 5, 6, 2, 3, 1, 3, 6, 3, 5, 6, 4, … — this would mean it produces 0 bits of entropy (compare the numbers with the decimals of sqrt(2)). Other factors may also influence the amount of entropy in the output: consider what happens if you roll the dice by just dropping it straight down from 1cm/1inch above the table. With this discussion as background, and for simplicity, going forward I will assume that my D6 produces 3 bits of entropy on every roll.
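To put rough numbers on that intuition, here is a small Python sketch computing the Shannon entropy of a fair D6 and of a slightly biased one; the biased probabilities are made up purely for illustration:

    from math import log2

    def entropy(probs):
        # Shannon entropy in bits: -sum(p * log2(p))
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([1 / 6] * 6))
    # ~2.585 bits: the most a fair D6 can give per roll
    print(entropy([0.18, 0.166, 0.166, 0.166, 0.166, 0.156]))
    # ~2.584 bits: a small physical bias costs very little

Note that even the fair dice gives slightly less than the 3 bits assumed above; the assumption is a simplification.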

We need to figure out how many times we need to roll it. I usually find myself needing a 128-bit random number (16 bytes). Crypto algorithms and protocols typically use power-of-2 data sizes. 64 bits of entropy results in brute-force attacks requiring about 2^64 tests, and for many operations, this is feasible with today’s computing power. Performing 2^128 operations does not seem possible with today’s technology. To produce 128 bits of entropy using a D6 that produces 3 bits of entropy per roll, you need to perform ceil(128/3)=43 rolls.

We also need to design an algorithm to convert the D6 output into the resulting 128-bit random number. While it would be nice from a theoretical point of view to let each and every bit of the D6 output influence each and every bit of the 128-bit random number, this becomes difficult to do with pen and paper. For simplicity, my process will be to write the binary representation of the D6 output on paper in 3-bit chunks and then read it off in 8-bit chunks. After 8 rolls, there are 24 bits available, which can be read off as 3 distinct 8-bit numbers. So let’s do this for the D6 outputs of 3, 6, 1, 1, 2, 5, 4, 1:

3   6   1   1   2   5   4   1
011 111 001 001 010 101 010 001
01111100 10010101 01010001
124 0x7C 149 0x95 81 0x51

After 8 rolls, we have generated the 3-byte hex string “7C9551”. I repeat the process four more times, concatenating the strings, resulting in a hex string with 15 bytes of data. To get the last byte, I only need to roll the D6 three more times, where the two high bits of the last roll are used and the lowest bit is discarded. Let’s say the last D6 outputs were 4, 2, 3; this would result in:

4   2   3
100 010 011
10001001
137 0x89

So the 16 bytes of random data are “7C9551..89”, with “..” replaced by the four remaining 3-byte chunks of data.
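The whole point of the exercise is to not need a computer, but as a cross-check of the pen-and-paper bookkeeping, the chunk-and-regroup conversion can be written in a few lines of Python:

    def rolls_to_bytes(rolls):
        # Write each roll as a 3-bit chunk, concatenate, and read off
        # whole bytes; leftover low bits are discarded, as in the text.
        bits = "".join(format(r, "03b") for r in rolls)
        usable = len(bits) // 8 * 8
        return bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))

    print(rolls_to_bytes([3, 6, 1, 1, 2, 5, 4, 1]).hex().upper())  # 7C9551
    print(rolls_to_bytes([4, 2, 3]).hex().upper())                 # 89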

So what’s the next step? That depends on what you want to use the random data for. For some purposes, such as generating a high-quality 128-bit AES key, I would be done. The key is right there. To generate a high-quality ECC private key, you need to generate somewhat more randomness (matching the ECC curve size) and do a couple of EC operations. To generate a high-quality RSA private key, unfortunately, you will need much more randomness, to the point where it makes more sense to implement a PRNG seeded with a strong 128-bit seed generated using this process. The latter approach is the general solution: generate 128 bits of data using the dice approach, and then seed a CSPRNG of your choice to get a large amount of data quickly. These steps are somewhat technical, and you lose the pen-and-paper properties, but code to implement these parts is easier to verify than verifying that you get good quality entropy out of your RNG implementation.
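As one concrete (and deliberately simple) way to do that seeding step, the 128-bit seed could be stretched with an extendable-output function such as SHAKE-256 from Python’s standard library. This is only a sketch of the idea, not a recommendation of a particular construction:

    import hashlib

    def expand_seed(seed, nbytes):
        # Deterministically stretch a strong 16-byte seed into as many
        # output bytes as needed, e.g. for RSA key generation.
        assert len(seed) == 16
        return hashlib.shake_256(seed).digest(nbytes)

    seed = bytes(16)  # replace with the 16 dice-generated bytes
    material = expand_seed(seed, 1024)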

Richard Hartmann: One pot noodles

12 November, 2014 - 03:57

I had prepared a long and somewhat emotional blog post called "On unintended consequences" to write a rather sad bit of news off of my heart. While I believe the points raised were logical, courteous, and overall positive, I decided to do something different and replace sad things with happy things.

So anyway, for 3-4 people you will need:

  • The largest, widest cooking pot you can find (you want surface area to let more water evaporate)
  • 500g noodles, preferably Bavette
  • 300g cherry tomatoes
  • ~150g sundried tomatoes
  • ~150g grilled peppers
  • a handful of olives
  • two medium-sized red onions
  • as much garlic as is socially acceptable in your group
  • one or two handfuls of fresh basil leaves
  • large gulp of olive oil
  • ~100g fresh-ground Parmesan
  • salt, to taste
  • random source of capsaicin, to taste
  • water

Proceed to the cooky part of the evening:

  • Slice and cut all vegetables into sizes of your preference; personally, I like to stay on the chunky side, but do whatever you feel like.
  • Pour the olive oil into the pot; optionally add oil from your sundried tomatoes and/or grilled peppers in case those came in oil.
  • Put the pot onto high heat and toss the chopped vegetables in as soon as it starts heating up.
  • Stir for maybe a minute, then add a bit of water.
  • Toss in the noodles and add just enough water to cover everything.
  • Now is a good time to add salt and capsaicin, to taste.
  • Cook everything down on medium to high heat while stirring and scraping the bottom of the pot so nothing burns. You want to get as much water out of the mix as possible.
  • Towards the end, maybe a minute before the noodles are al dente, wash the basil leaves and rip them into small pieces.
  • Turn off the heat, add all basil and cheese, stir a few times, and serve.

If you don't have any of those ingredients on hand and/or want to add something else: Just do so. This is not an exact science and it will taste wonderful any way you make it.

Mike Hommey: Building a Firefox Debian package

11 November, 2014 - 17:26

It’s actually been possible for some time, but I made that simpler recently, and I figured I should mention it.

  • Grab the iceweasel source
    $ apt-get source iceweasel
  • Install its build dependencies
    $ apt-get build-dep iceweasel
  • Build it
    $ cd iceweasel-*
    $ PRODUCT_NAME=firefox dpkg-buildpackage -rfakeroot

John Goerzen: I’m hiring a senior Linux sysadmin/architect

11 November, 2014 - 11:29

I’m never sure whether to post such things here, but I hope that it’s of interest to people: I’m trying to hire a top-notch Linux person for a 100% telecommute position. I’m particularly interested in people with experience managing 500 or more OS instances. It’s a shop with a lot of Debian, by the way. You can apply at that URL and mention you saw it in my blog if you’re interested.

Gustavo Noronha Silva: Yay, the left won! Or did it?

11 November, 2014 - 06:38

Originally published on politi.kov

I have been asked by a bunch of friends from outside of Brazil for my opinion regarding the recent elections we had in Brazil, and it is a bit complicated to explain it without some background, so I decided to write this piece providing a bit of history so that people can understand my opinion.

The elections this year were a rematch of our traditional polarization between the Workers’ Party (PT) and the social democracy party (PSDB), which has been going on since 1994. PT and PSDB used to be allies. In the 80s, when the dictatorship dropped the law that forbade more than 2 parties, the opposition party, MDB, began breaking up into several smaller ones.

PSDB was founded by politicians and intellectuals who were inspired by Europe’s social democracy and political systems. Parliamentarism, for instance, is one of the historical causes of the party. The Workers’ Party had a more grassroots origin, with union leaders, Marxist intellectuals and Marxist-inspired Catholic priests being the main founders. They drew their inspiration from the USSR and Cuba, and were very close to social movements.

Lula (PT) and FHC (PSDB) campaigning together in 1981, by Clóvis Cranchi Sobrinho

Some people have celebrated the reelection of Dilma Rousseff as a victory of the left against the right. In my opinion that view is wrong for several reasons. First, because I disagree that PSDB and Aécio Neves in particular are right-wing, both in terms of economics and social/moral issues. Second, because I believe Dilma’s first government took a quite severe turn to the right on several topics that matter a lot to me. Since comparing it with PSDB’s government during the 90s has been one of the main strategies of the campaign this year, I’ll argue why I think it was actually a pretty good government with a lot of left in it.

Unlike what happens in most other places, Brazil does not really have an actual right-wing party, economics-wise. Although we might see the birth of a couple in the near future, no current party is really against public health, education and social security being provided by the state as rights, or wants to decrease state size and lower taxes significantly. It should come as no surprise that even though it has undergone a lot of liberal reforms over the last 20 years, Brazil is still a very closed country, with very high import tariffs and a huge presence of the state in the economy. There is a certain consensus about all of that, with disagreements being essentially on implementation details, not goals.

On the other hand, and contrary to popular belief, when it comes to social and moral issues we are a very conservative people. Ironically, the two parties which have been in power over the last 20 years are quite progressive, being historically proponents of diversity, minority rights, and reproductive rights. They have had to compromise on those causes to become viable alternatives, given the conservative nature of the majority of the voters.

Despite their different origins and beliefs, both parties share socialist inclinations and were allies from the outset. That changed in 1992, when president Collor, who had been elected in a runoff against Lula (whom PSDB supported), was impeached by Congress for corruption. With no formal political support and a chaotic situation on his hands, Itamar Franco, the vice president, called for a “national union” government to see out the last two years of the term. PSDB answered the call, but the Workers’ Party decided against being part of the government.

Fernando Henrique Cardoso, a sociologist who was one of the leaders of PSDB, was chosen to lead the Foreign Relations Ministry, but a few months later was nominated to the Economy Ministry. At the time, Brazil lived under hyperinflation of close to 1000% a year, and several stabilization plans had been attempted. Economy Ministers did not last very long in office back then. FHC gathered a team of economists and sponsored their stabilization plan, which turned out to be highly successful: the Plano Real (“Real Plan”). In addition to introducing a new currency, something that was becoming pretty familiar to Brazilians by then, it also attacked the structural causes of inflation.

Lula was counting on the failure of the Plano Real when he ran against FHC in 1994, but the plan succeeded, giving FHC two terms as president. During those two terms, FHC introduced several institutional changes that made Brazil a saner country. In addition to the hyperinflation, Brazil had lived through a debt crisis for decades and was still in default. FHC’s team renegotiated the debts and reopened lines of credit, but most importantly, introduced reforms that made the Brazilian finances and financial system credible.

The problem was not even that Brazil had a fiscal deficit; it just did not have any control whatsoever of money supply and budget. Banks, regardless of whether they were private or public, had very little regulation and took advantage of the hyperinflation to hide monstrous holes in their balance sheets. When inflation was gone and regulation became stricter, those became apparent, and it was pretty clear that the system would collapse if nothing was done.

Some people like to say that FHC was a president who ruled for the rich and didn’t care about the poor. I think the way the potential collapse of the banking system was handled is a great counter-example of that. The government passed laws that made the owners of the banks responsible for the financial problems, regardless of whether they were caused by mismanagement or fraud. If a bank went under, the central bank intervened and added enough money to protect the deposits, but that money was a loan that had to be repaid by the owners of the bank, and the owners’ properties were added as collateral to the loan. As a Brazilian journalist once said, the people did not risk losing their deposits; the bankers, though, did risk losing their banks. Today, we have a separate fund, filled with money from the banks, that does what the central bank did back then when required.

Compare that to countries where the banking system was saved with tax payer money and executives kept getting huge bonuses regardless, while owners kept their profits. It is hard to find an initiative that is more focused on the public interest against the interest of the rich people who caused the problem. This legislation, called PROER, is still in place today, and it came along with solid regulation of the banking system. It should come as no surprise that Brazil went through the financial crisis of 2008 with not a single hiccup of the banking system and no fear of bank runs. Despite having been against PROER back in the day, Lula celebrated its existence in 2008, when it was clear it was one of the reasons we would not suffer much. He even advertised it as something that should be adopted by the US and Europe.

It is also pretty common to hear that under FHC social questions were not a priority. I believe it is pretty simple to see that that was not the case, both by inspecting the growth of social spending and the improvement of social indicators for the period, such as the UN’s Human Development Index. One area in which people are particularly critical of the FHC government is investment in higher education, and they are actually quite right. Brazil has free federal universities, and those did not get a lot of priority in the 90s. However, I would argue that while it is a matter of priorities, it is not one of education versus something else, but rather of what to invest in inside education. The reality is that basic education was the priority.

When FHC came to power, Brazil had a significant number of children who were not going to school at all. The goal was to make access to schools universal for young children, and that goal was reached. Every child has been going to school since the early 2000s, and that is a significant achievement which reaches the poorest. While the federal universities are attended essentially by the Brazilian elite (given the difficulty of passing the exams and the relative lack of quality of free public schools compared to private ones, which is still a reality to this day), investment in getting children to even go to school in the early years has a significant impact on the lives of the poorest.

It is important to remember that getting every child to go to school is also what gave birth to one of the most celebrated programs from the Lula era: Bolsa Família (“Family Allowance”), a direct money transfer to poor families, particularly those with children, which has been an important contributor to lowering inequality and getting people out of extreme poverty. To get the money, the families need to ensure their children are 1) attending school and 2) getting vaccinated.

That program comes from the FHC government, in which it was created with the name Bolsa Escola (“School Allowance”), in its turn inspired by a program of the same name by governor Cristovam Buarque, from PT. What Lula did, and he deserves a lot of credit for this, was to merge a series of smaller programs with Bolsa Escola, and then expand the program to ensure it got to more and more people. Interestingly, during the announcement of the program he credited the idea of doing that to a state governor from PSDB. You can see why I think these two should be allies again.

When faced with all these arguments, people will eventually say that FHC was bad because he privatized companies and used orthodox economic policies. Well, if that is what it takes, then we’ll have to take Lula down with him, because his first term was essentially a continuation of FHC’s second term: orthodox economic policies to keep inflation down, along with privatization of several state-owned companies and banks. But Lula, whom I voted for and whose government I believe was a good one, is not my subject: Dilma is.

During Lula’s second term, Dilma gained a lot of power when other major leaders of PT went down for corruption. She became second in command and started leading several programs. A big believer in developmentalism, she started pushing for a bigger role for the state in the coordination of the productive sector, with a clear focus on growing the industrial base.

One of the initiatives she sponsored was a sizable increase in the number and size of subsidized loans given out by the national development bank (BNDES). Brazil started an unofficial “national champions” program, where the government picked a few big companies to get a huge amount of subsidized credit.

The goal was for these selected firms to get big enough to be competitive on the global market. The criteria for the choices are completely opaque, if they even exist, and included handing out millions in subsidized credit to Eike Batista, who became Brazil’s richest entrepreneur for a while, and lost pretty much everything when it became clear the oil would not be pumping out of his fields after all, taking down with them a huge amount of public funds invested by BNDES.

The way this policy was enacted, it is unclear how much it really costs in terms of public funds: the Brazilian treasury issues debt at market interest rates, lends that money to BNDES at below-market interest, and BNDES then lends it out to the big companies at lower-than-market rates. Although it is obviously unsustainable, the problem does not yet show in the balance because the grace period for BNDES’s debt with the treasury runs to 2040. The fact that this has a cost and, perhaps more importantly, a huge opportunity cost is not clear because it is not part of the government budget. Why are we putting money into this rather than quadrupling Bolsa Família, which studies show generates 1.78 reais of GDP for every 1 real invested? Worse, why are we not even updating Bolsa Família enough to cover inflation?

When Dilma got elected in 2010, the first signs were pretty bad. She was already seen as someone who did not care much for the environment, and in her first month in power she made good on that reputation by pushing to get construction of the Belo Monte Dam started as soon as possible, regardless of whether the conditionalities were satisfied. To this day there are several issues with how the building of the dam is going: the handling of the indigenous people and of the small city nearby is lacking, and conditionalities are still not met.

Beyond Belo Monte, indigenous leaders are being assassinated, and deforestation in the Amazon forest increased by 122% in 2014 alone. Dilma’s answer to people who question her on these kinds of issues is essentially: “would you rather not have electric power?”

Her populist authoritarian nature and obsession with industry are also pretty evident when it comes to her policies in the energy area as a whole. She showed up on national TV on the eve of our independence day celebration to announce a reduction in electricity tariffs, mainly for industry, but also for homes. Nobody really knew how. The following week she sent a fast-track project to Congress to automatically renew the concessions of power grid operators, requiring those who accepted it to lower tariffs, instead of holding an auction, which was already necessary anyway because the concessions were up in 2015. There was no discussion with stakeholders; there was just a populist announcement and a great deal of rhetoric painting anyone who opposed it as being against the people.

And now everything has gone into the crapper, because that move represented a breach of contract that required indemnification, and we had a pretty bad drought that made power more expensive, given the need to turn on the thermal generators. Combining the costs of the thermal generation, the indemnities, and the financial fallout that the grid operators suffered, we are already at 105 billion reais and counting; nobody knows how high the cost will go. Any reduction in tariffs has long since been cancelled out. And the fact that industry has lowered production significantly ends up being good news: we would probably be under rationing already if that were not the case.

You would expect someone who fought a dictatorship to be pretty good in terms of human and civil rights. What we see in reality is a lack of respect for those things. During the World Cup, Dilma put the army on the streets and supported arbitrary behaviour by state police forces throughout the country. They jailed a bunch of demonstrators preemptively. No shit. The would-be demonstrators were kept in jail throughout the tournament under false accusations. Dilma’s Minister of Justice said several times that the case against them was solid and that the arrests were legal, but it turned out the case simply did not exist. Just this week we had a number of executions orchestrated by policemen in the state of Pará, and there has been zero reaction from the federal government.

In the oil industry, Dilma has enacted a policy of subsidizing gas prices by using a fixed price that used to be lower than international prices (with the fall in international prices, that is no longer the case). That would not be a problem if Brazil were self-sufficient in oil and gas, which we are not: we had to import a significant amount of both. The implicit subsidy cost Petrobrás a huge amount of cash – the more gas it sold, the bigger the losses. This led not only to a decrease in the company’s market value (it is a state-controlled but publicly traded company), but to a reduction in its capacity for investment as well.

That is more problematic than it sounds because, under our current concession model, every single oil field needs to have Petrobrás as a member of the consortium. Limiting the company’s investment capacity limits the rate at which our pre-salt oil fields can be explored, and thus the speed at which we can become self-sufficient. Chicken and egg, anyone?

To make things worse, Dilma has made the policies that lowered taxes on car production, used to foster economic activity during the 2008-2010 crisis, essentially permanent. This led to a significant increase in traffic and pollution in Brazilian cities, while at the same time increasing the pressure on Petrobrás, which had to import more and more gas. Meanwhile, Brazilian cities suffer from a severe lack of mobility infrastructure. A recent study has shown that Brazil has spent almost twice as much subsidized money on pro-car policies as on pro-mass-transit projects. Talk about good use of public funds.

One of the only remaining pieces of good news the government was still able to mention was the constant reduction in extreme poverty. Dilma was actually elected promising to eradicate extreme poverty, and changed the government’s slogan to “A rich country is a country with no poverty” (País rico é país sem pobreza). Well, it turns out all of these policies caused both inequality and extreme poverty to stop falling as of 2013. And given that the policies were actually deepened in 2014, I believe it is very likely we’ll see an increase in both when we get the data for 2014, next year.

Other than that, her policies ended up being a complete failure. Despite giving tax benefits to several sectors, investment has fallen, growth has fallen, and inflation is quite high, at 6.6% for the last 12 months. In terms of minorities, her government has been a severe setback, with the government backing away from educational material against homophobia, saying it would not do “advertisement of sexual choice”, and revoking a decree that allowed the public health system to perform abortions in the cases allowed by law (essentially if the woman has been raped).

Looking at Dilma’s policies, I really can’t see that much of the left in them, honestly. So why, you might ask, has this victory been deemed a victory of the left over the right? My explanation is the aura the Workers’ Party still manages to keep about itself. There’s a notion that whatever PT does, it will still be more to the left than PSDB, which I think is just crazy.

There is also a fair amount of idealizing of Dilma just because she is Lula’s protégée. People will forgive anything, provided it is the Workers’ Party doing it. Thankfully, the number of people aligned with the left who supported the candidate from PSDB this election tells me this is changing quite rapidly. Hopefully that leads to PT having to reinvent itself, and getting in touch with the left again.

Steinar H. Gunderson: Chess analysis

11 November, 2014 - 06:37

For those watching the World Chess Championship: I've put up an analysis site that runs during the games. (They start at 12:00 GMT, 15:00 local Sochi time, every day except rest days.)

Rambling ahead:

Interest has been fabulous; what started with something like 30-40 viewers in the first game of last year peaked at 650 during game 2 of this match. I have no idea where they come from, but seemingly word of mouth has created interest, and after I upgraded to a 20-core Haswell-EP, it's easily the place where you can get the strongest chess analysis (save for maybe Chessdom, if you pay for the premium option). There's enough interest that I can't be lazy with the scaling anymore; I'm glad I rewrote the JSON serving engine for an earlier tournament. (One Perl process per user didn't really work well anymore; now it's on Node.js because the event-based model fits that exact problem domain really well. I tried learning Go for it at first, but it really didn't work out for me. Somehow the syntax is just too ugly, and the channels-of-channels stuff is too constrained a way for me to think.) I've received emails from Italy and Buenos Aires. I've seen visitors from something like 15 different countries at a time, and that's probably a massive undercount. Chess is an Internet sport.

I had problems enough with bandwidth (choose between wasting 2–3 cores for Varnish gzipping the same data over and over again, or using 200 Mbit/sec of bandwidth) that I had to code in support for gzip in the backend, too. I dread the day when I have to support JSON diffing or something, but I think I've generally just made it small enough that we shouldn't see too big of a problem in game 3. I think we can sustain something like 10k users right now. Plan for scaling 10x, but not 100x. Check.
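The shape of that backend fix, sketched in Python for brevity (the real site uses Perl and Node.js): compress each new analysis state once, and hand every client the same cached blob.

    import gzip
    import json

    def publish(state):
        # Called once per engine update; the returned blob is cached
        # and served to every client with "Content-Encoding: gzip",
        # instead of recompressing identical data for each request.
        payload = json.dumps(state).encode("utf-8")
        return gzip.compress(payload, compresslevel=6)

    blob = publish({"score": "+0.3", "pv": ["d4", "Nf6", "c4"]})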

I struggle with Perl segfaults (I use Perl to control the two analysis engines, combine everything together, and output the JSON that's the basis of what's shown on the page). They always come at the worst possible time, but so rarely that they're impossible to reproduce. I've considered trying Perl from jessie instead of wheezy, but a dist-upgrade in the middle of a match would be madness. Alternatively I could switch to another language, but that would only give me a new set of problems to discover. (There's a surprising amount of code already.)

Rambling over. Good luck to both players!

Francesca Ciceri: The Trout Cabal

11 November, 2014 - 04:22

A rare shot of some members of the Trout Cabal doing their secret handshake, while wearing red noses to bring the fun back to Debian (as per their shadow DPL platform).

During the meeting, the members of the cabal were able to update their manifesto as well as devise new brilliant ways to promote Debian around the world. Many thanks to MiniDebconf UK 2014 organizers for hosting this important meeting. Also, thanks Nattie for the pic :).

It's not about how it inits, it's all about how it ends. (Going out in style, you know?)

Neil Williams: On getting NEW packages into stable

11 November, 2014 - 04:16

There’s a lot of discussion / moaning / arguing going on at the moment, so I thought I’d post something about how LAVA got into Debian Jessie, the work involved, and the lessons I’ve learnt. Hopefully it will help someone avoid the disappointment of having their package miss the migration into a future stable release. This was going to be a talk at the MiniDebconf UK in Cambridge, but I decided to put it out as a permanent blog entry in the hope that it will be a useful reference for the future, not just for Jessie.

Context

LAVA relies on a number of dependencies which were – at the time all this started – NEW to Debian as well as many others already in Debian. I’d been running LAVA using packages on my own system for a few months before the packages were ready for use on the main servers (I never actually installed LAVA using the old virtualenv method on my own systems, except in a VM). I did do quite a lot of this on my own but I also had a team supporting the effort and valuing the benefits of moving to a packaged system.

At the time, LAVA was based on Ubuntu (12.04 LTS Precise Pangolin) and a new Ubuntu LTS was close (Trusty Tahr 14.04) but I started work on this in 2013. By the time my packages were ready for general usage, it was winter 2013 and much too close to get anything into Ubuntu in time for Trusty. So I started a local repo using space provided by Linaro. At the same time, I started uploading the dependencies to Debian. json-schema-validator, django-testscenarios and others arrived in April and May 2014. (Trusty was released in April). LAVA arrived in NEW in May, being accepted into unstable at the end of June. LAVA arrived in testing for the first time in July 2014.

Upstream development continued apace and a regular monthly upload, with some hotfixes in between, continued until close to the freeze.

At this point, note that although upstream is a medium-sized team, and the Debian packaging also has a team, all the uploads were made by me. I planned ahead. I knew that I would be going to Macau for Linaro Connect in February – a critical stage in the finalisation of the packages and the migration of existing instances from the old methods. I knew that I would be on vacation from August through to the end of September 2014 – including at least two weeks with absolutely no connectivity of any kind.

Right at this time, Django 1.7 arrived in experimental with the intent to go into unstable and hence into Jessie. This was a headache for me; I initially sought to delay the migration until after Jessie. However, we discussed it upstream, allocated time within the busy schedule and also sought help from within Debian with the RFH tag. Raphaël Hertzog contributed patches for Django 1.7 support and we worked on those patches upstream, once I was back from vacation. (The final week of my vacation was a work conference, so we had everyone together at one hacking table.)

Still there was more to do: the Django 1.7 patches allowed the unit tests to complete but broke other parts of the lava-server package and needed subsequent tweaks and fixes.

Even with all this, the auto-removal from testing of packages affected by RC bugs in their dependencies became very important to monitor (it still is). It would be useful if some packages had less complex dependency chains (I’m looking at you, uwsgi) as the auto-removal also covers build-depends. This led to some more headaches with libmatheval. I’m not good with functional programming languages; I did have some exposure to Scheme when working on GnuCash upstream, but it wasn’t pleasant. The thought of fixing a Scheme problem in the test suite of libmatheval was daunting. Again though, by asking for help, I found people in the upstream team who wanted to refresh their knowledge of Scheme and were able to help out. The fix migrated into testing in October.

Just for added complications, lava-server gained a few RC bugs of its own during that time too – fixed upstream, but awkward nonetheless.

Achievement unlocked

So that’s how a complex package like lava-server gets into stable. With a lot of help. The main problem with top-level packages like this is the sheer weight of the dependency chain. Something seemingly unrelated (like libmatheval) can seriously derail the migrations. The package doesn’t use the matheval support provided by uwsgi. The bug in matheval wasn’t in the parts of matheval used by uwsgi. It wasn’t in a language I am at all comfortable in fixing – but it’s my name on the changelog of the NMU. That happened because I asked for help. OK, when django1.7 was scheduled to arrive in Debian unstable and I knew that lava was not ready, I reacted out of fear and anxiety. However, I sought help, help was provided and that help was enough to get upstream to a point where the side-effects of the required changes could be fixed.

Maintaining a top-level package in Debian is becoming more like maintaining a core package in Debian and that is a good thing. When your package has a lot of dependencies, those dependencies become part of the maintenance workload of your package. It doesn’t matter if those are install time dependencies, build dependencies or reverse dependencies. It doesn’t actually matter if the issues in those packages are in languages you would personally wish to be expunged from the archive. It becomes your problem but not yours alone.

Debian has a lot of flames right now and Enrico encouraged us to look at what else is actually happening in Debian besides those arguments. Well, on top of all this with lava, I also did what I could to help the arm64 port along and I’m very happy that this has been accepted into Jessie as an official release architecture. That’s a much bigger story than LAVA – yet LAVA was and remains instrumental in how arm64 gained the support in the kernel and various upstreams which allowed patches to be accepted and fixes to be incorporated into Debian packages.

So a roll call of helpers who may otherwise not have been recognised via changelogs, in no particular order:

  • Steve McIntyre (Debian & Linaro)
  • Raphaël Hertzog (Debian)
  • Dave Pigott (Linaro)
  • Rémi Duraffort (Linaro)
  • Sjoerd Simons (Debian)
  • Antonio Terceiro (Debian and formerly Linaro)
  • Martin Pitt (Debian)
  • Jordi Mallach (Debian)
  • Hector Oron (Debian)
  • Colin Watson (Debian)

Also general thanks to the Debian FTP and Release teams.

Lessons learnt
  1. Allow time! None of the deadlines or timings involved in this entire process were hidden or unexpected. NEW always takes a finite but fairly lengthy amount of time but that was the only timeframe with any amount of uncertainty. That is actually a benefit – it reminds you that this entire process is going to take a significant amount of time and the only loser if you try to rush it is going to be you and your package. Plan for the time – and be sceptical about how much time is actually required.
  2. Ask for help! Everyone in Debian is a volunteer. Yes, the upstream for this project is a team of developers paid to work on this code (and largely only this code) but the upstream also has priorities, requirements, objectives and deadlines. It’s no good expecting upstream to do everything. It’s no good leaving upstream insufficient time to fit the required work into the existing upstream schedules. So ask for help within upstream and within Debian – ask for help wherever you can. You don’t know who may be able to help you until you ask. Be clear when asking for help – how would someone test their proposed fix? Exactly what are you asking for help doing? (Hint: “everything” is not a good answer.)
  3. Keep on top of announcements and changes. The release team in Debian have made the timetable strict and have published regular updates, guidelines and status notes. As maintainer, it is your responsibility to keep up with those changes and make others in the upstream team aware of the changes and the implications. Upstream will rely on you to provide accurate information about these requirements. This is almost more important than actually providing the uploads or fixes. Without keeping people informed, even asking for help can turn out to be counter-productive. Communicate within Debian too – talk to the teams, send status updates to bugs (even if the status is tag 123456 + help).
  4. Be realistic! Life happens around us, things change, personal timetables get torn up. Time for voluntary activity can appear and disappear (it tends to disappear far more often than it grows, so take that into account too).
  5. Do not expect others to do the work for you – asking for help is one thing, leaving the work to others is quite another. No complaining to the release team that they are “blocking” your work and avoid pleading or arguing when a decision is made. The policies and procedures within Debian are generally clear and there are quite enough arguments without adding more. Read the policies, read the guidelines, watch how other packages and other maintainers are handled and avoid those mistakes. Make it easy for others to help deliver what you want.
  6. Get to know your dependency chain – follow the links on the packages.debian.org pages and get a handle on which packages are relevant to your package. Subscribe to the bug pages for some of the more “high-risk” packages. There are tools to help. rc-alert can help you spot problems with runtime dependencies (you do have your own package installed on a system running unstable – if not, get that running NOW). Watching build-dependencies is more difficult, especially build-dependencies of a runtime dependency, so watch the RC bug lists for packages in your dependency chain; a sketch of automating such a check follows this list.
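As an illustration of automating that kind of watch, here is a sketch using the python-debianbts library (the library reportbug uses to talk to the BTS); treat the exact API as an assumption and check its documentation before relying on it. The package names are placeholders for your own dependency chain:

    import debianbts as bts  # assumed API; see python-debianbts docs

    WATCH = ["lava-server", "uwsgi", "libmatheval"]  # your chain here
    RC_SEVERITIES = {"serious", "grave", "critical"}

    for pkg in WATCH:
        # Fetch each open bug's status and flag the release-critical ones.
        for bug in bts.get_status(bts.get_bugs(package=pkg)):
            if bug.severity in RC_SEVERITIES and not bug.done:
                print(pkg, bug.bug_num, bug.severity, bug.subject)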

Above all else, remember why you and upstream want the packages in Debian in the first place. Debian is a respected distribution and has an acknowledged reputation for stability and portability. The very qualities that you and your upstream desire from having your package in Debian have direct implications for the amount of work and the amount of time that will be required to get your packages into Debian and keep them there. Having your package in Debian will bring considerable benefits but you will be required to invest a considerable amount of time. It is this contribution which is valuable to Debian and it is this work which will deliver the benefits you seek.

Being an expert in the one package is wildly inadequate. Debian is about the system, the whole distribution and sooner or later, you – as the maintainer – will be absolutely required to handle something which is so far out of your comfort zone it’s untrue. The reality is that you are not expected to fix that problem – you are expected to handle that problem and that includes seeking and acknowledging the help of others.

The story isn’t over until release day. Having your package in testing the day before the freeze is one step. It may be a large step, but it is only one. The status of that package still needs monitoring. That long dependency chain can still come back and bite.

Don’t wait for problems to surprise you.

Finally

One thing I do ask is that other upstream teams and maintainers think about the dependency chain they are creating. It may sound nice to have bindings for every interpreted language possible when releasing your compiled library but it does not help people using that library. Yes, it is more work releasing the bindings separately because a stable API is going to be needed to allow version 1.2.3 to work alongside 1.2.2 and 1.3.0 or the entire effort is pointless. Consider how your upstream package migrates. Consider how adding yet another build-dependency for an optional component makes things exponentially harder for those who need to rely on that upstream. If it is truly optional, release it separately and keep backwards compatibility on each side. It is more work but in reality, all that is happening is that the work is being transferred from the distribution (where it helps only that one distribution and causes duplication into other distributions) into the upstream (where it helps all distributions). Think carefully about what constitutes core functionality and release the rest separately.

Combining bindings for php, ruby, python, java, lua and xslt into a single upstream release tarball is a complete nonsense. It simply means that the package gets blocked from new uploads by the constant churn of being involved in every transition that occurs in the distribution. There is a very real risk that the package will miss a stable release simply by having fingers in too many pies. That hurts not only this upstream but every upstream trying to use any part of your code. Every developer likes to think that people are using and benefiting from their effort. It’s not nice to directly harm the interests of other developers trying to use your code. It is not enough for the binary packages to be discrete – migrations happen by source package and the released tarball needs to not include the optional bindings. It must be this way because it is the source package which determines whether version 1.2.3 of plugin foo can work with version 1.2.0 of the library as well as with version 1.3.0.

Maintainers regularly deal with these issues – so talk to your upstream teams and explain why this is important to that particular team. Help other maintainers use your code and help make it easier to make a stable release of Debian. The quicker the freeze & release process becomes, the quicker new upstream versions can be uploaded and backported.

Lars Wirzenius: A vision of backups in Debian

11 November, 2014 - 00:34

Meet Alfred. Alfred is a Debian user. He has a laptop with Debian and a desktop environment running on it. Alfred does a lot of important things on his computer: his hobby is photographing his cat, and he also works for a non-governmental organisation that investigates and reports on human rights violations. His job involves a lot of travel to many parts of the world, and he needs to handle a lot of very sensitive information. His laptop uses full-disk encryption, and it's generally speaking very well secured against the various security threats that come with his job.

He is worried about losing important data. He's not too worried that the sensitive information he has will leak if his laptop is stolen, but it might be impossible to re-create the data if the laptop is gone. If he interviews a whistleblower from a slave-trading corporation, and his laptop is stolen after that, it might be impossible to ever meet with the whistleblower again.

Alfred wants backups of his data. He gets a USB thumb drive, and plugs it in. The laptop has never seen the drive before, so it asks Alfred if the drive should be used for backups. Alfred says yes.

The laptop formats the thumb drive, again with full-disk encryption, and then runs a backup. The backup automatically picks up all the files from Alfred's home directory, and some system configuration files that may be necessary as well. (Read: /home and /etc.) Files that are usually not very precious, such as web browser caches, are automatically excluded.

Later, when Alfred wants to update the backup, he plugs in the same drive again. The system recognises the drive, and runs the backup. While the backup is running, Alfred has an indicator in his desktop status bar. If Alfred leaves the drive plugged in, and changes anything in his home directory, that gets immediately backed up to the backup drive. Until the changes have been backed up, the indicator stays on Alfred's status bar.

This isn't good enough, however. Alfred needs to carry the USB drive with him, and if he's mugged, he might lose both the laptop and the backup drive. Therefore, the system administrator at Alfred's NGO, Janet, sets up an account on an online backup server, and e-mails Alfred a configuration file, which Alfred drops into the backup system's configuration tool.

From then on, whenever Alfred's laptop is online, and can see the backup server (identified by an SSH host key), any changes Alfred makes are backed up as soon as possible. For the next interview, as soon as the interview is finished and Alfred closes the laptop lid to suspend it, the backup has already finished, both to the online server and the USB thumb drive.

Alfred is now happy, and no longer fears for the safety of his data.

Janet, however, is still a little worried, because the online backup server is an attractive target for attacks. She asks Alfred to configure the backup service on the laptop to encrypt and digitally sign the backups, and sends the master backup public key with the request. Janet keeps the corresponding private key in a secure location.

Alfred goes into the configuration dialog, ticks the right box, and drops in the server public key. The backup software generates a new public key for the laptop to use for encrypting the backups, and Alfred e-mails that to Janet, using PGP encrypted and signed e-mail. He also puts the laptop backup encryption keys on a couple of USB thumb drives, which he stores in safe places (in his sock drawer and coffee jar, but don't tell anyone that).

Alfred's online backups are now encrypted with public keys so that both Alfred and Janet can decrypt them, but only they can do that. The backups are digitally signed so that if the server is hacked, the backups can't be altered without it being detectable.
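Such a backup tool could implement the encrypt-and-sign step with GnuPG along these lines; a minimal sketch with placeholder key IDs, encrypting each archive to both the laptop key and the organisation's master key so that either holder can decrypt, and signing it so tampering is detectable:

    import subprocess

    def protect(archive, laptop_key, master_key):
        # Encrypt to both recipients and sign in one pass with GnuPG.
        out = archive + ".gpg"
        subprocess.run(
            ["gpg", "--output", out, "--sign",
             "--encrypt", "--recipient", laptop_key,
             "--recipient", master_key, archive],
            check=True,
        )
        return out

    protect("home-2014-11-10.tar", "ALFRED-LAPTOP-KEY", "NGO-MASTER-KEY")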

Some time passes.

Alfred needs to go to speak to the general assembly of the Cat Conference, about how awesome his cat is. This requires him to travel to the US, and he's worried that the US authorities will confiscate his laptop and try to get at his work files that way. He deletes all his work files, ssh keys, and other files that aren't necessary to show his cat pictures at the conference.

The conference goes fine, and when Alfred comes back home, he gets the USB thumb drive that contains his backup encryption key. He plugs it in and tells the backup configuration software to import it. Alfred can then open his backups on the online backup server in his file browser, and restore his files by copying them with drag and drop.

However, the next day Alfred's cat, upset at how much he travels, pees on the laptop. It is ruined. Everything is lost.

Alfred gets a new laptop from Janet, and installs Debian on it. During installation, Alfred points the installer at the USB backup drive, and the installer restores all of Alfred's own files, as well as the system configuration. After a little while, Alfred has a newly installed laptop with all his usual software and all of his files.

This is a summary of a vision for backups being a service in a default Debian install in the future. It is just a vision for now, and nobody is working on making it reality. Would you like to work on this for the release after jessie?

(No cats were harmed in the production of this vision.)

Daniel Leidert: Removal of debian.wgdd.de and {cvs,svn,vcs}.wgdd.de

10 November, 2014 - 22:57

If you've recently tried to browse to or apt-get from either cvs.wgdd.de, svn.wgdd.de, vcs.wgdd.de, debian.wgdd.de or ubuntu.wgdd.de, you've probably seen (and will still see) an error (410, Gone) coming up, and I'd like to give a short explanation why.

{cvs,svn,vcs}.wgdd.de

I've left my server provider and shut down the above services, keeping only a small number of services running. The domains {cvs,svn,vcs}.wgdd.de were used to provide (a) a Subversion (SVN) server (via HTTPS and dav_svn) for some public and private work and (b) a CVS web-client for some old project work in CVS.

Among the latter was, for example, old code to generate manual pages for the proprietary fglrx graphics driver, stuff that had lain there untouched for many years. So I guess it was about time to finally remove it :)

The Subversion web-client gave public access to some packaging work I do for the Debian GNU/Linux distribution, e.g. for the cvsweb and gtypist packages, and to some non-official packaging work. For the official packages I plan to move the files into the collab-maint web space and adjust the packages' control files accordingly. Everything else will be hosted non-publicly in the future. I still intend to move stuff that turns out to be useful for more people to public places like GitHub and Co.

debian.wgdd.de

I used this site to describe my usage of Debian GNU/Linux on the hardware I own ... laptop, servers etc. I wrote a few HOWTOs and provided a collection of useful links. You can still find all of this via the archive.org service. I also had a repository up and working, especially to provide bluefish packages for users of Debian stable and Ubuntu. Half a year ago I dropped the Ubuntu build environments and packages and moved the Debian stable backports to official places. This effectively emptied the repository and left only the wgdd-archive-keyring package in place. So there is no real need for a public repository anymore, and the link list had probably become outdated too. All in all, I decided to stop this service (maybe I'll forward the site here later :)).

If you see an error regarding the debian.wgdd.de URL when running apt-get or aptitude, then there is a reference to this site in /etc/apt/sources.list or /etc/apt/sources.list.d/*, which can be safely removed. Furthermore, you should get rid of the wgdd-archive-keyring package:

apt-get autoremove --purge wgdd-archive-keyring

... or the repository key:

apt-key del E394D996
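
By the way, to locate the stale entry in the first place, something like this should do:

grep -r 'wgdd.de' /etc/apt/sources.list /etc/apt/sources.list.d/
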
What else

In case you need any content from the mentioned services, just let me know.

Petter Reinholdtsen: A Debian package for SMTP via Tor (aka SMTorP) using exim4

10 November, 2014 - 19:40

The right to communicate with your friends and family in private, without anyone snooping, is a right every citizen has in a liberal democracy. But this right is under serious attack these days.

A while back it occurred to me that one way to make the dragnet surveillance conducted by the NSA, GCHQ, FRA and others (and confirmed by the whistleblower Snowden) more expensive for Internet email is to deliver all email using SMTP via Tor. Such an SMTP option would be a nice addition to the FreedomBox project, if we could send email between FreedomBox machines without leaking metadata about the emails to the people peeking on the wire. I proposed this on the FreedomBox project mailing list in October and got a lot of useful feedback and suggestions. It also became obvious to me that this was not a novel idea: the same idea was tested and documented by Johannes Berg as early as 2006, and both the Mailpile and the Cables systems propose a similar method / protocol to pass emails between users.

To implement such a system, one needs to set up a Tor hidden service providing the SMTP protocol on port 25, and use email addresses looking like username@hidden-service-name.onion. With such addresses, connections to port 25 on hidden-service-name.onion using Tor will go to the correct SMTP server. To do this, one needs to configure the Tor daemon to provide the hidden service, and the mail server to accept emails for this .onion domain. To learn more about Exim configuration in Debian and test the design provided by Johannes Berg in his FAQ, I set out yesterday to create a Debian package making it trivial to set up such an SMTP-over-Tor service on Debian. Getting it to work was fairly easy, and the source code for the Debian package is available from github. I plan to move it into Debian if further testing proves this to be a useful approach.
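
To give a rough idea of the Tor side of this (the hidden service directory below is just an illustrative choice; the package's setup script takes care of the details), the torrc entries would look something like:

cat >> /etc/tor/torrc <<'EOF'
HiddenServiceDir /var/lib/tor/smtorp/
HiddenServicePort 25 127.0.0.1:25
EOF

After restarting tor, the hostname file in that directory contains the .onion name to use in the email addresses.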

If you want to test this, set up a blank Debian machine without any mail system installed (or run apt-get purge exim4-config to get rid of exim4). Install tor, clone the git repository mentioned above, build the deb and install it on the machine. Next, run /usr/lib/exim4-smtorp/setup-exim-hidden-service and follow the instructions to get the service up and running. Restart tor and exim when it is done, and test mail delivery using swaks like this:

torsocks swaks --server dutlqrrmjhtfa3vp.onion \
  --to fbx@dutlqrrmjhtfa3vp.onion

This will test the SMTP delivery using tor. Replace the email address with your own address to test your server. :)
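
For reference, the whole test drive described above might look roughly like this; the clone URL is a placeholder, since the exact repository address isn't given here:

apt-get purge exim4-config
apt-get install tor git build-essential
git clone https://github.com/USER/exim4-smtorp.git   # placeholder URL
cd exim4-smtorp && dpkg-buildpackage -us -uc
dpkg -i ../exim4-smtorp_*.deb
/usr/lib/exim4-smtorp/setup-exim-hidden-service
service tor restart && service exim4 restart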

The setup procedure is still too complex, and I hope it can be made easier and more automatic. Especially the tor setup needs more work. Also, the package includes a tor-smtp tool written in C, but its task should probably be rewritten in some script language to make the deb architecture independent. It would probably also make the code easier to review. The tor-smtp tool currently needs to listen on a socket for exim to talk to it, and is started using xinetd. It would be better if no daemon and no socket were needed. I suspect it is possible to get exim to run a command line tool for delivery instead of talking to a socket, and I hope to figure out how in a future version of this system.

Until I wipe my test machine, I can be reached using the fbx@dutlqrrmjhtfa3vp.onion mail address, deliverable over SMTorP. :)

Matthias Klumpp: Introducing Limba – a software installer experiment

10 November, 2014 - 19:06

As some of you already know, since the larger restructuring in PackageKit for the 1.0 release, I am rethinking Listaller, the 3rd-party application installer for Linux systems, as well.

During the past weeks, I have been playing around with a lot of different ideas and code to make installation of 3rd-party software easily possible on Linux, while also working together with the distribution package manager. I have now come up with an experimental project which might achieve this.

Motivation

Many of you know Lennart's famous blogpost on how we put together Linux distributions. And he makes a lot of good and valid points there (in fact, I agree with his reasoning there). The proposed solution, however, is not something which I am very excited about, at least not for the use-case of installing a simple application[1]. Leaving things like the exclusive dependency on technology like Btrfs aside, the solution outlined by Lennart basically bypasses the distribution itself, instead of working together with it. This results in duplication of installed libraries, making it harder to keep an overview of which versions of which software components are actually running on the system. There is also a risk of security holes due to libraries not being updated. The security issues are worked around by a superior sandbox, which still needs to be implemented (but will definitely come soon, maybe next year).

I wanted to explore a different approach of managing 3rd-party applications on Linux systems, which allows sharing as much code as possible between applications.

Limba – Glick2 and Listaller concepts merged

In order to allow easy creation of software packages, as well as the ability to share software between different 3rd-party applications, I took heavy inspiration from Alexander Larsson's Glick2 project, combining it with ideas from the application-directory based Listaller.

The result is Limba (named after the Limba tree, not the voodoo spirit – I needed some name starting with "li" to keep the prefix used in Listaller, and for a tool like this the name didn't really matter).

Limba uses OverlayFS to combine an application with its dependencies before running it, together with mount namespaces and shared subtrees. Except for OverlayFS, which only landed in the mainline kernel recently, all the other kernel features Limba needs have been available for years (and many distributions ship OverlayFS on older kernels as well).

How does it work?

In order to achieve separation of software, each software component is located in a separate container (= package). A software component can be an application, like Kate or GEdit, but also a single shared library (openssl) or even a full runtime (KDE Frameworks 5 parts, GNOME 3).

Each of these software components can be identified via AppStream metadata, which is just a few bits of XML. A Limba package can declare a dependency on any other software component. In case that software is available in the distribution’s repositories, the version found there can be used. Otherwise, another Limba package providing the software is required.

Limba packages can be provided from software repositories (e.g. provided by the distributor), or be nested in other packages. For example, imagine the software "Kate" requires a version of the Qt5 libraries >= 5.2. The downloadable package for "Kate" can then be bundled with that dependency by including the "Qt5 5.2" Limba package inside the "Kate" package. In case another piece of software is installed later which requires the same version of Qt, the already installed version will be used.

Since the software components are located in separate directories under /opt/software, an application will not automatically find its dependencies, or be able to locate its own files. Therefore, each application has to be run by a special tool which merges the directory trees of the main application and its dependencies together using OverlayFS. This has the nice side effect that the main application can override files from its dependencies, if necessary. The tool also sets up a new mount namespace, so if the application is compiled with a certain prefix, it does not need to be relocatable to find its data files.
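
A minimal sketch of the kind of mount such a runner tool could perform, with hypothetical component paths (the application sits in the upper layer so its files win over those of its dependencies; the workdir must live on the same filesystem as the upper layer):

# All paths are hypothetical; Limba's actual layout and tooling may differ.
mkdir -p /opt/software/.work/kate /run/limba/kate
mount -t overlay overlay \
    -o lowerdir=/opt/software/qt5-5.2,upperdir=/opt/software/kate-14.11,workdir=/opt/software/.work/kate \
    /run/limba/kate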

At installation time, certain files (e.g. the .desktop file) are split out of the installed directory tree, so the newly installed application achieves almost full system integration.

AQNAY*

Can I use Limba now?

Limba is an experiment. I like it very much, but it might happen that I find some issues with it and kill it off again. So, if you feel adventurous, you can compile the source code and use the example “Foobar” application to play around with Limba. Before it can be used in production (if at all), some more time is needed.

I will publish documentation on how to test the project soon.

Doesn’t OverlayFS have a maximum stacking depth?

Oh yes it has! The "How does it work" explanation doesn't tell the whole truth in that regard (mainly to keep the section small). In fact, Limba will generate a "runtime" for the newly installed software, which is a directory with links to the actual individual software components the runtime consists of. The runtime is identified by a UUID. This runtime is then mounted together with the respective application using OverlayFS. This works pretty well, and also means that no dependency resolution has to be done immediately before an application is started.
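
As a hypothetical sketch of such a generated runtime (whether Limba uses hardlinks, symlinks or bind mounts to aggregate the components is an assumption here):

RUNTIME=/opt/software/.runtimes/$(uuidgen)
mkdir -p "$RUNTIME"
cp -al /opt/software/qt5-5.2/. "$RUNTIME"/   # hardlinked copy of each component tree
cp -al /opt/software/kf5-5.4/. "$RUNTIME"/
# "$RUNTIME" can then serve as the single lower layer of the OverlayFS mount shown earlier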

That dependency stuff gives me a headache…

Admittedly, allowing dependencies adds a whole lot of complexity. Other approaches, like the one outlined by Lennart, work around that (and there are good reasons for doing so).

In my opinion, the dependency-sharing and de-duplication of software components, as well as the ability to use the components which are packaged by your Linux distribution is worth the extra effort.

Can you give an overview of future plans for Limba?

Sure, so here is the stuff which currently works:

  • Creating simple packages
  • Installing packages
  • Very basic dependency resolution (no relations (like >, <, =) are respected yet)
  • Running applications
  • Initial bits of system integration (.desktop files are registered)

These features are planned for the near future:

  • Support for removing software
  • Automatic software updates of 3rd-party software
  • Atomic updates
  • Better system integration
  • Integration with the new sandboxing features
  • GPG signing of packages
  • More documentation / bugfixes

Remember that Limba is still an experiment

XKCD 927

Technically, I am replacing one solution with another one here, so the situation does not change at all ;-). But indeed, some duplicate work is being done, because more people are now working on similar questions in this area.

But I think this is a good thing, because the solutions worked on are fundamentally different approaches, and by exploring multiple ways of doing things, we will come up with something great in the end. (XKCD reference)

Doesn’t the use of OverlayFS have an impact on the performance of software running with Limba?

I ran some synthetic benchmarks and didn’t notice any problems – even the startup speed of Limba applications is only a few milliseconds slower than the startup of the “raw” native application. However, I will still have to run further tests to give a definitive answer on this.

How do you solve ABI compatibility issues?

This approach requires software to keep its ABI stable. But since software can have a strict dependency on a specific version of another component (although I'd discourage that), even people who are worried about this issue can be happy. We are getting much better at tracking unwanted ABI breaks, and larger projects offer a stable API/ABI during a major release cycle. For smaller dependencies, there are, as explained above, stricter dependencies.

In summary, I don’t think ABI incompatibilities will be a problem with this approach – at least not more than they have been in general. (The libuild facilities from Listaller to minimize dependencies will still be present in Limba, of course.)

You are wrong because of $X!

Please leave a comment in this case! I’d love to discuss new ideas and find the limits of the Limba concept – that’s why I am writing C code after all, since what looks great on paper might not work in reality, or might have issues one hasn’t thought about before. So any input is welcome!

Conclusion

Last but not least I want to thank Alexander Larsson for writing Glick2, which Limba is heavily inspired from, and for his patient replies to my emails.

If Limba turns out to be a good idea, you can expect a few more blog posts about it soon.

* Answered questions nobody asked yet

[1]: Don’t get me wrong, I would like to have these ideas implemented – they offer great value. But I think for “simple” software deployment, the solution is overkill.

Dirk Eddelbuettel: BH release 1.54.0-5

10 November, 2014 - 18:52

A new release of BH, our package providing Boost headers for use by R, is now on CRAN. This release was triggered by a request from Ben Goodrich (noted below), who asked for the Boost Circular Buffer which RStan uses.

No other changes were made.

Changes in version 1.54.0-5 (2014-11-09)
  • Added Boost Circular Buffer requested by Ben Goodrich for RStan

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Konstantinos Margaritis: New owner/maintainer for CSVChart Drupal module

10 November, 2014 - 15:59

Back in ~2008, I created a small Drupal module called CSV Chart, which used CustomFilter and Google Chart Tools: Image Charts. It was a simple module that took embedded CSV data and presented it as Google charts. It was very handy for what I wanted (and still want) to do: present benchmarks for my work on Altivec (and now SIMD in general).

However, I don't really code in PHP anymore (I haven't written any PHP since 2010), so the module was left to bitrot. I did some minor adjustments to make it run with the current D7-based site, but that was it.

Thankfully, others found the module useful enough to use and adopt, and so in the last few days I transferred ownership of CSV Chart to Pierre Vriens, so that he can continue development and maintenance of this tiny module. I would like to publicly thank Pierre for his work!

And that's the beauty of Free Software!

John Goerzen: Debian – A plea to worry about what matters, and not take ourselves too seriously

10 November, 2014 - 08:11

I posted this on debian-devel today. I am also posting it here, because I believe it is important to more than just Debian developers.

Good afternoon,

This message comes on the heels of Sam Hartman’s wonderful plea for compassion [1] and the sad news of Joey Hess’s resignation from Debian [2].

I no longer frequently post to this list, but when you’ve been a Debian developer for 18 years, and still care deeply about the community and the project, perhaps you have a bit of perspective to share.

Let me start with this:

Debian is not a Free Software project.

Debian is a making-the-world-better project, a caring for people project, a freedom-spreading project. Free Software is our tool.

As many of you, hopefully all of you, I joined Debian because I enjoyed working on this project. We all did, didn’t we? We joined Debian because it was fun, because we were passionate about it, because we wanted to make the world a better place and have fun doing it.

In short, Debian is life-giving, both to its developers and its users.

As volunteers, it is healthy to step back every so often, and ask ourselves two questions: 1) Is this activity still life-giving for me? 2) Is it life-giving for others?

I have my opinions about init. Strong ones, in fact. [3] They’re not terribly relevant to this post. Because I can see that they are not really all that relevant.

14 years ago, I proposed what was, until now anyhow, one of the most controversial GRs in Debian history. It didn’t go the way I hoped. I cared about it deeply then, and still care about the principles.

I had two choices: I could be angry and let that process ruin my enjoyment of Debian. Or I could let it pass, and continue to have fun working on a project that I love. I am glad I chose the latter.

Remember, for today, one way or another, jessie will still boot.

18 years ago when I joined Debian, our major concerns were helping newbies figure out how to compile their kernels, finding manuals for monitors so we could set the X modelines properly, finding some sort of Free web browser, finding some acceptable Office-type software.

Wow. We WON, didn’t we? Not just Debian, but everyone. Freedom won.

I promise you – 18 years from now, it will not matter what init Debian chose in 2014. It will probably barely matter in 3 years. This is not key to our goals of making the world a better place. Jessie will still boot. I say that even though my system runs out of memory every few days because systemd-logind has a mysterious bug [4]. It will be fixed. I say that even though I don’t know what init system it will use, or how much choice there will be. I say that because it is simply true. We are Debian. We will make it work, one way or another.

I don’t post much on this list anymore because my personal passion isn’t with posting on this list anymore. I make liberal use of my Delete Thread keybinding on -vote these days, because although I care about the GR, I don’t care about it enough to read all the messages about it. I have not yet decided if I will spend the time researching it in order to vote. Instead of debating the init GR, sometimes I sit on the sofa with my wife. Sometimes I go out and fly the remote-control airplane I’m learning to fly. Sometimes I repair my plane after a flight that was shorter than planned. Sometimes I play games with my boys, or help them with homework, or share my 8-year-old’s delight as a text file full of facts about the Titanic that he wrote in Emacs comes spitting out of the printer. Sometimes I write code or play with the latest Linux filesystems or build a new server for my basement.

All these things matter more to me than init. I have been using Debian at home for almost 20 years, at various workplaces for almost that long, and it is not going to stop being a part of my life any time soon. Perhaps I will have to learn how to administer a new init system. Well, so be it; I enjoy learning new things. Or perhaps I will have to learn to live with some desktop limitations with an old init system. Well, so be it; it won’t bother me much anyhow. Either way, I’m still going to be using what is, to me, the best operating system in the world, made by one of the world’s foremost Freedom projects.

My hope is that all of you may also have the sense of peace I do, that you may have your strong convictions, but may put them all in perspective. That we as a project realize that the enemy isn’t the lovers of the other init, but the people that would use law and technology to repress people all over the world. We are but one shining beacon on a hill, but the world will be worse off if our beacon winked out.

My plea is that we each may get angry at what matters, and let go of the smaller frustrations in life; that we may each find something more important than init/systemd to derive enjoyment and meaning from. [5] May you each find that airplane to soar freely in the skies, to lift your soul so that the joy of using Free Software to make the world a better place may still be here, regardless of what /sbin/init is.

[1] https://lists.debian.org/debian-project/2014/11/msg00002.html

[2] https://lists.debian.org/debian-devel/2014/11/msg00174.html

[3] A hint might be that in my more grumpy moments, I realize I haven’t ever quite figured out why the heck this dbus thing is on so many of my systems, or why I have to edit XML to configure it… ;-)

[4] #765870

[5] No disrespect meant to the init/systemd maintainers. Keep enjoying what you do, too!

Junichi Uekawa: After upgrading psgml complains.

10 November, 2014 - 05:32
After upgrading, psgml complains. It hangs forever on psgml-html mode startup, and after interrupting it, pressing 'tab' would signal an error. Trying to load the HTML 4.01 Transitional DTD seems to make this happen. I switched to the HTML5 DTD and things were no longer broken. Hmmm...


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.