Planet Debian

Planet Debian - http://planet.debian.org/

Vincent Sanders: Ampere was the Newton of Electricity.

14 January, 2016 - 07:08
I think Maxwell was probably right; certainly the unit of current to which Ampère gave his name has been a concern of mine recently.

Regular readers may have noticed my unhealthy obsession with single board computers. I have recently rehomed all the systems into my rack, which threw up a small issue of powering them all. I had been using an ad-hoc selection of USB wall warts and adapters, but this ended up needing nine mains sockets and, short of purchasing a very expensive PDU for the rack, would have needed a lot of space.

Additionally, having nine separate converters from mains AC to low-voltage DC was consuming over 60 watts for 20 W of load! The majority of these supplies were simply delivering 5 V, either via micro USB or DC barrel jack.

Initially I considered using a ten-port powered USB hub, but this seemed expensive given that I was not going to use the data connections. It also had a limit of 5 W per port, and some of my systems could potentially draw more power than that, so I decided to build my own supply.

A quick look on eBay revealed that a 150 W (30 A at 5 V) switching supply could be had from a UK vendor for £9.99, which seemed about right. An enclosure, a fused and switched IEC inlet, an ammeter/voltmeter with shunt, and suitable cables were acquired for another £15.

A little careful drilling and cutting of the enclosure made openings for the inlets, cables and display. These were then wired together with crimped and insulated spade and ring connectors. I wanted this build to be safe and reliable so care was taken to get the neatest layout I could manage with good separation between the low and high voltage cabling.

The result is a neat supply with twelve outputs which I can easily extend to eighteen if needed. I was pleasantly surprised to discover that even with twelve SBCs connected generating a 20 W load, the power drawn by the supply was 25 W, or about 80% efficiency instead of the 33% previously achieved.
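
For the record, those efficiency figures work out as follows:

before: 20 W load / 60 W drawn ≈ 33% efficient
after:  20 W load / 25 W drawn = 80% efficient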

The built-in meter allows me to easily see the load on the supply, which so far has not risen above 5 A even at peak draw, despite the Cubietruck and Banana Pi having spinning-rust hard drives attached, so there is plenty of room for my SBC addiction to grow (I already pledged for a Pine64).

Overall I am pleased with how this turned out, and while there are no detailed design files for this project, it should be easy to follow if you want to repeat it. One note of caution, though: this project involves mains wiring, and while I am confident in my own ability to deal with potentially lethal voltages, I cannot be responsible for anyone else, so caveat emptor!

Norbert Preining: Ian Buruma: Wages of Guilt

14 January, 2016 - 05:47

Since moving to Japan, I got more and more interested in history, especially the recent history of the 20th century. The book I just finished, Ian Buruma's (Wiki, home page) Wages of Guilt – Memories of War in Germany and Japan (Independent, NYRB), has been a revelation for me. As an Austrian living in Japan, I am experiencing the discrepancy between these two countries with respect to their treatment of their war legacy practically daily, and many of my blog entries revolve around the topic of Japanese non-reconciliation.

Willy Brandt went down on his knees in the Warsaw ghetto, after a functioning democracy had been established in the Federal Republic of Germany, not before. But Japan, shielded from the evil world, has grown into an Oskar Matzerath: opportunistic, stunted, and haunted by demons, which it tries to ignore by burying them in the sand, like Oskar’s drum.
Ian Buruma, Wages of Guilt, Clearing Up the Ruins

The comparison of Germany and Japan with respect to their recent history, as laid out in Buruma's book, throws a spotlight on various aspects of the psychology of the German and Japanese populations, while at the same time not falling into the easy trap of explaining everything by differences in guilt culture. A book of great depth and broad insights that everyone with even the slightest interest in these topics should read.

This difference between (West) German and Japanese textbooks is not just a matter of detail; it shows a gap in perception.
Ian Buruma, Wages of Guilt, Romance of the Ruins

Even thinking about giving a halfway complete account of this book is impossible for me. The sheer amount of information, on both the German and the Japanese side, is impressive. His incredible background (studies of Chinese literature and Japanese film!) and long years as a journalist, editor, etc., enrich the book with facets normally not available: in particular, his knowledge of both German and Japanese film history, and of the reflection of history in film, was a completely new aspect for me (see my recent post (in Japanese)).

The book comprises four parts: the first with the chapters War Against the West and Romance of the Ruins; the second with the chapters Auschwitz, Hiroshima, and Nanking; the third with History on Trial, Textbook Resistance, and Memorials, Museums, and Monuments; and the last part with A Normal Country, Two Normal Towns, and Clearing Up the Ruins. Let us look at the chapters in turn:

  • War Against the West
    This chapter sets the stage in two parts, Bonn and Tokyo, by comparing the reactions in these countries to the Iraq war. The German “Betroffenheit” (to be betroffen implies a sense of guilt, a sense of shame, or even embarrassment) as the core of German post-war politics, literature, and media is introduced. On the Japanese side, it covers the difficult and diverse attitudes towards the Iraq (and other) wars, as well as the necessary bits of post-war history and the development of the Japanese constitution.

    What is so convenient in the cases of Germany and Japan is that pacifism happens to be a high-minded way to dull the pain of historical guilt. Or, conversely, if one wallows in it, pacifism turns national guilt into a virtue, almost a mark of superiority, when compared to the complacency of other nations.

  • Romance of the Ruins
    This chapter focuses on the war and the immediate post-war period, with references to the specific literature and films that emerged out of the circumstances of destroyed countries that had lost the war.

    Hitler’s doom and the emperor’s speech, the end of one symbol and the odd continuity of another. Whatever their symbolic differences, both would be associated forever with ruins—ruined cities, ruined people, ruined ideals.

  • Auschwitz
    The psychological construction of war memorials in both Germanies, which focuses on the religious aspects, is discussed, followed by an excursion through post-war German literature and the long-term ignorance of anything related to the Holocaust.

    Here the past had fossilized into something monumental or, as Adorno would have put it, museal.

  • Hiroshima

    Paralleling the previous chapter, Hiroshima introduces the simplistic and reduced focus of the Hiroshima memorials, which mostly ignore the foreign victims, many of them Koreans forced to work in Japan, and concentrate on Japanese martyrdom. By focusing on the atomic bomb, everything else is removed from the field of view.

    The problem with this quasi-religious view of history is that it makes it hard to discuss past events in anything but nonsecular terms. Visions of absolute evil are unique, and they are beyond human explanation or even comprehension. To explain is hubristic and amoral. If this is true of Auschwitz, it is even more true of Hiroshima. The irony is that while there can be no justification for Auschwitz unless one believes in Hitler’s murderous ideology, the case for Hiroshima is at least open to debate. The A-bomb might have saved lives; it might have shortened the war. But such arguments are incompatible with the Hiroshima spirit.

  • Nanking
    The history and aftermath of the Nanking massacre are described, as well as the attempts at its rejection and refutation. The Tokyo trials and their critique by governmental scholars are touched upon, as well as the bit of fresh air that blew through Japanese society after the death of Hirohito, which led to the publication of the records of Nanking by Azuma Shiro 東 史郎.

    Yet the question remains whether the raping and killing of thousands of women, and the massacre of thousands, perhaps hundreds of thousands, of other unarmed people, in the course of six weeks, can still be called extreme conduct in the heat of battle. The question is pertinent, particularly when such extreme violence is justified by an ideology which teaches the aggressors that killing an inferior race is in accordance with the will of their divine emperor.

  • History on Trial
    One of the central chapters, in my opinion. It discusses and compares the two post-war trials, the Nuremberg trials in Germany and the Tokyo trials in Japan. In both cases the juridical value is questioned, focusing on the winner-loser situation of the post-war years.

    The Nuremberg trials were to be a history lesson, then, as well as a symbolic punishment of the German people—a moral history lesson cloaked in all the ceremonial trappings of due legal process. They were the closest that man, or at least the men belonging to the victorious powers, could come to dispensing divine justice.

    Also, the differences in war trials between East and West Germany are compared: the East German Waldheim trials, as well as the thorough purge of Nazis from East German jurisdiction and politics, stood in stark contrast to both West Germany's very restricted trials and Japan's complete non-purge of criminals.

    As long as the emperor lived, Japanese would have trouble being honest about the past. For he had been formally responsible for everything, and by holding him responsible for nothing, everybody was absolved, except, of course, for a number of military and civilian scapegoats, Officers and Outlaws, who fell “victim to victors’ justice.”

  • Textbook Resistance

    This chapter compares the representation of war and post-war times in the textbooks of West and East Germany and of Japan. The interesting case of Ienaga Saburo 家永 三郎 and the decades-long trials (1965-1993) around his history textbook is recounted. The ministry of education had forced a redaction of his history textbook to conform with the revisionist view of history, deleting most passages critical of the Japanese position during the first half of the 20th century. This was one of the very few cases in Japanese post-war history where someone stood up against this revisionist view.

    The judges and some of the counsel for the ministry sat back with their eyes closed, in deep concentration, or fast asleep. Perhaps they were bored, because they had heard it all before. Perhaps they thought it was a pointless exercise, since they knew already how the case would end. But it was not a pointless exercise. For Ienaga Saburo had kept alive a vital debate for twenty-seven years. One cussed schoolteacher and several hundred supporters at the courthouse might not seem much, but it was enough to show that, this time, someone was fighting back.

  • Memorials, Museums, and Monuments
    This chapter returns to war memorials: the change of meaning from post-WWI monuments, which were memorials, to post-WWII ones, which became warning monuments, indicating the shift in attention and evaluation of war history in Germany. In contrast to this stand Japan's near non-existence of war museums until the late '90s, as well as the Yasukuni shrine, honoring and celebrating, among others, several Class A war criminals as deities.

    The tragedy is not just that the suicide pilots died young. Soldiers (and civilians) do that in wars everywhere. What is so awful about the memory of their deaths is the cloying sentimentality that was meant to justify their self-immolation. There is no reason to suppose they didn’t believe in the patriotic gush about cherry blossoms and sacrifice, no matter how conventional it was at the time. Which was exactly the point: they were made to rejoice in their own death. It was the exploitation of their youthful idealism that made it such a wicked enterprise. And this point is still completely missed at the Peace Museum today.

  • A Normal Country

    This chapter discusses the slow normalization of the post-war situation from the '90s on, and all the hurdles that needed to be overcome. In the case of Germany, the speech of Philipp Jenninger, then president of the Bundestag, is recounted: 50 years after Kristallnacht he tried to give a speech of “historicization,” only to find himself shunned and expelled due to the lack of Betroffenheit.

    It was not an ignoble enterprise, but he should have recognized that Historisierung, even forty-three years after the war, was still a highly risky business. For a “normal” society, a society not haunted by ghosts, cannot be achieved by “normalizing” history, or by waving cross and garlic. More the other way around: when society has become sufficiently open and free to look back, from the point of view neither of the victim nor of the criminal, but of the critic, only then will the ghosts be laid to rest.

    On the Japanese side the case of Motoshima Hitoshi 本島 等, who dared to question Hirohito:

    Forty-three years have passed since the end of the war, and I think we have had enough chance to reflect on the nature of that war. From reading various accounts from abroad and having been a soldier myself, involved in military education, I do believe that the emperor bore responsibility for the war.

    which led to hitherto unseen demonstrations by extreme right-wing groups issuing death threats, culminating in a failed assassination attempt on Motoshima, all under the eyes of completely complacent Japanese police and politicians letting the right-wingers play their game.

    By breaking a Japanese taboo, Motoshima struck a blow for a more open, more normal political society, and very nearly lost his life. Jenninger, I like to think, wanted to strike a blow for the same, but failed, and lost his job. Perhaps he wasn’t up to the task. Or perhaps even West Germany was not yet normal enough to hear his message.

  • Two Normal Towns

    This chapter focuses on two rare cases of civil courage and political commitment: Anja Rosmus, who as a schoolgirl stepped forth to rewrite the history of Passau. She unveiled the truth about the deep involvement of many inhabitants of Passau in the NS crimes, a fact that had until then been covered up and purged from memory. She, too, received many death threats, including a killed cat nailed to her door. The response of the head of the tourist office in Passau, Gottfried Dominik, speaks to a very peculiar attitude:

    I asked him again about the local camp and the small hidden memorial. Dominik showed signs of distress. “It was difficult,” he admitted, “very difficult. I know what you mean. But let me give you my personal opinion. When you have a crippled arm, you don’t really want to show it around. It was a low point in our history, back then. But it was only twelve years in thousands of years of history. And so people tend to hide it, just as a person with a crippled arm is not likely to wear a short-sleeved shirt.”

    A similar incident is recounted on the Japanese side: the Hanaoka incident (detailed article) and its unveiling by Nozoe Kenji, in which 800 Chinese slave workers, after escaping from a forced-labor camp of the Kajima Corporation, were hunted down like rabbits and slaughtered. He, too, got death threats, and was virtually expelled from his home area because he dared to publish his findings.

    I think it is this basic distrust, this refusal to be told what to think by authorities, this cussed insistence on asking questions, on hearing the truth, that binds together Nozoe, Rosmus, and others like them. There are not many such people in Japan, or anywhere else for that matter. And I suspect they are not much liked wherever they live.

  • Clearing Up the Ruins

    The last chapter rounds up all the previous ones and looks into the most recent history and the near future. While not completely pessimistic with respect to Japan, it leaves clear statements on the current state of Japanese society and politics:

    The state was run by virtually the same bureaucracy that ran the Japanese empire, and the electoral system was rigged to help the same corrupt conservative party to stay in power for almost forty years. This arrangement suited the United States, as well as Japanese bureaucrats, LDP politicians, and the large industrial combines, for it ensured that Japan remained a rich and stable ally against Communism. But it also helped to stifle public debate and stopped the Japanese from growing up politically.

    His description of current Japanese society, written in 1995, is still hauntingly true in 2016:

    There is something intensely irritating about the infantilism of postwar Japanese culture: the ubiquitous chirping voices of women pretending to be girls; the Disneylandish architecture of Japanese main streets, where everything is reduced to a sugary cuteness; the screeching “television talents” rolling about and carrying on like kindergarten clowns; the armies of blue-suited salarymen straphanging on the subway trains, reading boys’ comics, the maudlin love for old school songs and cuddly mama-sans.

The book somehow left me with a bleak impression of Japanese post-war times as well as of Japan's future. Having read other books about political ignorance in Japan (Norma Field's In the Realm of a Dying Emperor, or the Chibana history), I find Buruma's characterization of Japanese politics striking. He couldn't foresee the recent changes in legislation pushed through by the Abe government, actually breaking the constitution, or the rewriting of history currently going on with respect to comfort women and Nanking. But reading his statement about Article Nine of the constitution and looking at the changes in political attitude, I am scared about where Japan is heading:

The Nanking Massacre, for leftists and many liberals too, is the main symbol of Japanese militarism, supported by the imperial (and imperialist) cult. Which is why it is a keystone of postwar pacifism. Article Nine of the constitution is necessary to avoid another Nanking Massacre. The nationalist right takes the opposite view. To restore the true identity of Japan, the emperor must be reinstated as a religious head of state, and Article Nine must be revised to make Japan a legitimate military power again. For this reason, the Nanking Massacre, or any other example of extreme Japanese aggression, has to be ignored, softened, or denied.
Ian Buruma, Wages of Guilt, Nanking

While there are signs of resistance in the streets of Japan (Okinawa and Henoko bay, the demonstrations against the secrecy law and the revision of the constitution), we have yet to see change driven by the people in a country ruled and dominated by oligarchs. I don't think there will be another Nanking Massacre in the near future, but Buruma's book shows that we are heading back to a nationalistic regime similar to pre-war times, just covered with a democratic veil to distract critics.

I close with several other quotes from the book that caught my attention:

In the preface and introduction:

[…] mainstream conservatives made a deliberate attempt to distract people’s attention from war and politics by concentrating on economic growth.

The curious thing was that much of what attracted Japanese to Germany before the war—Prussian authoritarianism, romantic nationalism, pseudo-scientific racialism—had lingered in Japan while becoming distinctly unfashionable in Germany.

In Romance of the Ruins:

The point of all this is that Ikeda’s promise of riches was the final stage of what came to be known as the “reverse course,” the turn away from a leftist, pacifist, neutral Japan—a Japan that would never again be involved in any wars, that would resist any form of imperialism, that had, in short, turned its back for good on its bloody past. The Double Your Incomes policy was a deliberate ploy to draw public attention away from constitutional issues.

In Hiroshima:

The citizens of Hiroshima were indeed victims, primarily of their own military rulers. But when a local group of peace activists petitioned the city of Hiroshima in 1987 to incorporate the history of Japanese aggression into the Peace Memorial Museum, the request was turned down. The petition for an “Aggressors’ Corner” was prompted by junior high school students from Osaka, who had embarrassed Peace Museum officials by asking for an explanation about Japanese responsibility for the war.

The history of the war, or indeed any history, is indeed not what the Hiroshima spirit is about. This is why Auschwitz is the only comparison that is officially condoned. Anything else is too controversial, too much part of the “flow of history”.

In Nanking, by the governmental pseudo-historian Tanaka:

“Unlike in Europe or China,” writes Tanaka, “you won’t find one instance of planned, systematic murder in the entire history of Japan.” This is because the Japanese have “a different sense of values” from the Chinese or the Westerners.

In History on Trial:

In 1950, Becker wrote that “few things have done more to hinder true historical self-knowledge in Germany than the war crimes trials.” He stuck to this belief. Becker must be taken seriously, for he is not a right-wing apologist for the Nazi past, but an eminent liberal.

There never were any Japanese war crimes trials, nor is there a Japanese Ludwigsburg. This is partly because there was no exact equivalent of the Holocaust. Even though the behavior of Japanese troops was often barbarous, and the psychological consequences of State Shinto and emperor worship were frequently as hysterical as Nazism, Japanese atrocities were part of a military campaign, not a planned genocide of a people that included the country’s own citizens. And besides, those aspects of the war that were most revolting and furthest removed from actual combat, such as the medical experiments on human guinea pigs (known as “logs”) carried out by Unit 731 in Manchuria, were passed over during the Tokyo trial. The knowledge compiled by the doctors of Unit 731—of freezing experiments, injection of deadly diseases, vivisections, among other things—was considered so valuable by the Americans in 1945 that the doctors responsible were allowed to go free in exchange for their data.

Some Japanese have suggested that they should have conducted their own war crimes trials. The historian Hata Ikuhiko thought the Japanese leaders should have been tried according to existing Japanese laws, either in military or in civil courts. The Japanese judges, he believed, might well have been more severe than the Allied tribunal in Tokyo. And the consequences would have been healthier. If found guilty, the spirits of the defendants would not have ended up being enshrined at Yasukuni. The Tokyo trial, he said, “purified the ‘crimes’ of the accused and turned them into martyrs. If they had been tried in domestic courts, there is a good chance the real criminals would have been flushed out.”

After it was over, the Nippon Times pointed out the flaws of the trial, but added that “the Japanese people must ponder over why it is that there has been such a discrepancy between what they thought and what the rest of the world accepted almost as common knowledge. This is at the root of the tragedy which Japan brought upon herself.”

Emperor Hirohito was not Hitler; Hitler was no mere shrine. But the lethal consequences of the emperor-worshipping system of irresponsibilities did emerge during the Tokyo trial. The savagery of Japanese troops was legitimized, if not driven, by an ideology that did not include a Final Solution but was as racialist as Hitler’s National Socialism. The Japanese were the Asian Herrenvolk, descended from the gods.

Emperor Hirohito, the shadowy figure who changed after the war from navy uniforms to gray suits, was not personally comparable to Hitler, but his psychological role was remarkably similar.

In fact, MacArthur behaved like a traditional Japanese strongman (and was admired for doing so by many Japanese), using the imperial symbol to enhance his own power. As a result, he hurt the chances of a working Japanese democracy and seriously distorted history. For to keep the emperor in place (he could at least have been made to resign), Hirohito’s past had to be freed from any blemish; the symbol had to be, so to speak, cleansed from what had been done in its name.

In Memorials, Museums, and Monuments:

If one disregards, for a moment, the differences in style between Shinto and Christianity, the Yasukuni Shrine, with its “relics,” its “sacred ground,” its bronze paeans to noble sacrifice, is not so very different from many European memorials after World War I. By and large, World War II memorials in Europe and the United States (though not the Soviet Union) no longer glorify the sacrifice of the fallen soldier. The sacrificial cult and the romantic elevation of war to a higher spiritual plane no longer seemed appropriate after Auschwitz. The Christian knight, bearing the cross of king and country, was not resurrected. But in Japan, where the war was still truly a war (not a Holocaust), and the symbolism still redolent of religious exultation, such shrines as Yasukuni still carry the torch of nineteenth-century nationalism. Hence the image of the nation owing its restoration to the sacrifice of fallen soldiers.

In A Normal Country:

The mayor received a letter from a Shinto priest in which the priest pointed out that it was “un-Japanese” to demand any more moral responsibility from the emperor than he had already taken. Had the emperor not demonstrated his deep sorrow every year, on the anniversary of Japan’s surrender? Besides, he wrote, it was wrong to have spoken about the emperor in such a manner, even as the entire nation was deeply worried about his health. Then he came to the main point: “It is a common error among Christians and people with Western inclinations, including so-called intellectuals, to fail to grasp that Western societies and Japanese society are based on fundamentally different religious concepts . . . Forgetting this premise, they attempt to place a Western structure on a Japanese foundation. I think this kind of mistake explains the demand for the emperor to bear full responsibility.”

In Two Normal Towns:

The bust of the man caught my attention, but not because it was in any way unusual; such busts of prominent local figures can be seen everywhere in Japan. This one, however, was particularly grandiose. Smiling across the yard, with a look of deep satisfaction over his many achievements, was Hatazawa Kyoichi. His various functions and titles were inscribed below his bust. He had been an important provincial bureaucrat, a pillar of the sumo wrestling establishment, a member of various Olympic committees, and the recipient of some of the highest honors in Japan. The song engraved on the smooth stone was composed in praise of his rich life. There was just one small gap in Hatazawa’s life story as related on his monument: the years from 1941 to 1945 were missing. Yet he had not been idle then, for he was the man in charge of labor at the Hanaoka mines.

In Clearing Up the Ruins:

But the question in American minds was understandable: could one trust a nation whose official spokesmen still refused to admit that their country had been responsible for starting a war? In these Japanese evasions there was something of the petulant child, stamping its foot, shouting that it had done nothing wrong, because everybody did it.

Japan seems at times not so much a nation of twelve-year-olds, to repeat General MacArthur’s phrase, as a nation of people longing to be twelve-year-olds, or even younger, to be at that golden age when everything was secure and responsibility and conformity were not yet required.

For General MacArthur was right: in 1945, the Japanese people were political children. Until then, they had been forced into a position of complete submission to a state run by authoritarian bureaucrats and military men, and to a religious cult whose high priest was also formally chief of the armed forces and supreme monarch of the empire.

I saw Jew Süss that same year, at a screening for students of the film academy in Berlin. This showing, too, was followed by a discussion. The students, mostly from western Germany, but some from the east, were in their early twenties. They were dressed in the international uniform of jeans, anoraks, and work shirts. The professor was a man in his forties, a ’68er named Karsten Witte. He began the discussion by saying that he wanted the students to concentrate on the aesthetics of the film more than the story. To describe the propaganda, he said, would simply be banal: “We all know the ‘what,’ so let’s talk about the ‘how.’” I thought of my fellow students at the film school in Tokyo more than fifteen years before. How many of them knew the “what” of the Japanese war in Asia?

Keith Packard: auto-calendar

14 January, 2016 - 05:01
Automatic Calendar Management — Notmuch + Calypso

One of the big “features” of outlook/exchange in my world is the automatic merging of incoming calendar updates from email. This makes my calendar actually useful in knowing what meetings people have asked me to attend. As I'm not willing to otherwise tolerate outlook, I decided to try to provide that in my preferred environment: notmuch and calypso.

Identifying calendar updates

The first trick is how to identify incoming messages with calendar updates. I'd love to be able to search for specific mime content types, but I haven't been able to construct such a search. Failing that, I'm just looking for messages containing the string 'text/calendar':

notmuch search --output=messages tag:inbox AND text/calendar

Next, I want to skip over previously scanned calendar updates, so I'll plan on tagging messages that have been scanned with the 'calendar' tag and skip those:

notmuch search --output=messages tag:inbox AND text/calendar AND not tag:calendar

jq — sed for json

With the messages containing likely calendar entries identified, the remaining task is to extract the MIME section containing the actual calendar data. Notmuch can generate json for the message, leaving us only needing to parse the json and extract the relevant section. I found the 'jq' tool in the archive, which looks like a rather complicated parsing and reformatting tool for json data. It took a while to understand, but I managed to generate a command string that takes a notmuch message and pulls out the content for all text/calendar elements:

jq -r '..| select(."content-type"? == "text/calendar") | .content'

This is a recursive walk over the data structure. It looks for structures with "content-type": "text/calendar" and dumps their "content" elements in raw text form.
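
To make the filter concrete, here is a toy input together with the invocation; the structure is much simplified from notmuch's real output, so treat it as an illustration only:

echo '{"body": [{"content-type": "multipart/mixed", "content": [
    {"content-type": "text/plain",    "content": "invite attached"},
    {"content-type": "text/calendar", "content": "BEGIN:VCALENDAR..."}]}]}' |
jq -r '.. | select(."content-type"? == "text/calendar") | .content'
# prints: BEGIN:VCALENDAR...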

Putting it all together

Here's the complete script:

#!/bin/bash

# Messages that look like calendar updates and haven't been scanned yet
SEARCH="tag:inbox AND not tag:calendar AND text/calendar"

TMP=$(mktemp)

trap 'rm -f "$TMP"' 0 1 15

notmuch search --output=messages $SEARCH | while read message; do
    # Extract the content of all text/calendar MIME parts
    notmuch show --format=json "$message" |
        jq -r '.. | select(."content-type"? == "text/calendar") | .content' > "$TMP"
    if [ -s "$TMP" ]; then
        # Import the calendar data; only tag the message if that worked
        calypso --import private/calendar "$TMP" && notmuch tag +calendar "$message"
    else
        # No calendar data after all; tag it so it isn't scanned again
        notmuch tag +calendar "$message"
    fi
done
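
To run this unattended, the script can be called from cron; a sketch, assuming it was saved as $HOME/bin/merge-calendar (a made-up name and path):

# crontab entry: look for new calendar invitations every ten minutes
*/10 * * * * $HOME/bin/merge-calendar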

I'd love to fix calypso's --import operation to talk to the local calypso daemon with the database loaded; the current mechanism actually loads the entire database into a new process and then applies the new data to that. With my calendar often containing hundreds of entries, that takes a while.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2015

14 January, 2016 - 00:12

As every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, 113.50 work hours have been dispatched among 9 paid contributors. Their reports are available:

  • Antoine Beaupré did 8h for his first month of work on LTS.
  • Ben Hutchings did 20 hours (out of 15 hours allocated + 15 extra hours remaining, meaning that he has 10 extra hours to do over January).
  • Chris Lamb did 12 hours.
  • Guido Günther did 9 hours (out of 8 hours allocated + 2 remaining, thus keeping 1 extra hour for January).
  • Mike Gabriel did nothing (the 8 hours allocated are carried over for January).
  • Raphaël Hertzog did 21.25 hours (18h allocated + 3.25h taken over from Mike’s unused hours of November).
  • Santiago Ruano Rincón did 15 hours (out of 18.25h allocated + 2 remaining + 3.25 taken over from Mike’s unused hours of November, thus keeping 8.50 extra hours for January).
  • Scott Kitterman did 8 hours.
  • Thorsten Alteholz did 21.25 hours (out of 18.25h allocated + 3 hours taken over from Mike’s unused hours of November).
Evolution of the situation

We lost our first silver sponsor (Gandi.net; they prefer to give the same amount of money to Debian directly) and another sponsor reduced its sponsorship level. While this won't show in the hours dispatched in January, we will take a small step backwards in February (unless we get new sponsors to replace them in the next 3 weeks).

This is a bit unfortunate, as we are looking to reinforce the amount of sponsorship we get as we approach Wheezy LTS, and we will need more support to properly handle virtualization-related packages and other packages that were formerly excluded from Squeeze LTS. Can you convince your company and help us reach our second goal?

In terms of security updates waiting to be handled, the situation is close to last month's. It looks like having about 20 packages in need of an update is the normal situation, and we can't really get much further down given the time required to process some updates (sometimes we wait until the upstream author provides a patch, and so on).

Thanks to our sponsors

We got one new bronze sponsor, but they are not listed (they did not fill in the form where we request permission to be listed).


Rhonda D'Vine: 2016 Resolutions

13 January, 2016 - 08:51

People these days often think about what worked well in the last year that they are proud of, what didn't work so well, and what they plan to change in the coming year. For me a fair amount of the resolutions were about my name. One of them was getting rid of my old name from the Debian Project Participants page. Actually, I started with it on New Year's Eve already:

Date    Package            Version
Dec 31  abook              0.6.1-1
Jan 01  tworld             1.3.2-1
Jan 01  blosxom            2.1.2-2
Jan 02  netris             0.52-10
Jan 03  t-prot             3.4-4
Jan 04  rungetty           1.2-16
Jan 05  tworld             1.3.2-2
Jan 06  tetrinet           0.11+CVS20070911-2
Jan 07  xblast-tnt-musics  20050106-3
Jan 08  xblast-tnt-sounds  20040429-3
Jan 09  xblast-tnt-levels  20050106-3
Jan 10  xblast-tnt-images  20050106-3
Jan 11  tetradraw          2.0.3-9
Jan 12  ldapvi             1.7-10

So far I've done a fair amount of the job. There are eight source packages left to get tweaked. Those might be a bit more difficult and require more attention, though. What I also did during those efforts: convert all packages to source format 3.0 (quilt), and use a dh-style debian/rules file. The latter enabled the packages to build reproducibly too, which is an added benefit. So this is a win on many levels.

One of the most prominent reasons why I hadn't converted to a dh-style debian/rules file yet was that I considered it making easy things easy and difficult things difficult. Finding out what to override and how to do that was something I was unable to figure out, and speaking with people didn't help me there either. Only recently someone told me that there is dh binary --no-act to figure out what would be called, and then you just prefix it with override_ in debian/rules to get where you want to go. This worked extremely well for me.
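
For illustration, a minimal dh-style debian/rules of the kind described above might look like this; the configure flag is just a placeholder, not taken from any of my packages:

#!/usr/bin/make -f

# Catch-all target: let dh run the whole standard sequence.
%:
	dh $@

# To replace one step, prefix the command name shown by
# "dh binary --no-act" with override_ (recipe lines use tabs).
override_dh_auto_configure:
	dh_auto_configure -- --disable-example
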
I'm personally still not a big fan of source format 3.0 (quilt), in that it insists on patches being applied and leaves them that way after building the source package, which makes it difficult to deal with when having the upstream source in the VCS too, but I have managed to find my way around so many things in the past that I can live with it. Not having to repack the upstream source when it isn't in .gz form is a far bigger benefit.

So, I hope to stay productive and be able to get the remaining packages adjusted and fixed too. I guess that's doable by the end of the month, along with getting rid of all reproducible-build bug reports against my packages. I will also check the packages that already carry my name, after my old name is gone from the overview page.


Antoine Beaupré: The Downloadable Internet

13 January, 2016 - 00:07
How corporations killed the web

I have read with fascination what we would previously have called a blog post, except it was featured on The Guardian: Iran's blogfather: Facebook, Instagram and Twitter are killing the web. The "blogfather" is Hossein Derakhshan, or h0d3r, an author from Teheran who was jailed for almost a decade for his blogging. The article is very interesting, both because it shows how fast things have changed in the last few years, technology-wise, and, more importantly, because it shows how content-free the web has become, to the point where Facebook's latest acquisition, Instagram, is not even censored by Iran. Those platforms have stopped being censored not because of democratic progress but because they have become totally inoffensive (in the case of Iran) or have become a tool of surveillance for the government and of targeted advertisement for companies (in the case of, well, most of the world).

This struck a chord, personally, at the political level: we are losing control of the internet (if we ever had it). The defeat isn't directly political: we have some institutions like ICANN and the IETF that we can still influence, even if only at the technological level. The defeat is economic, and, of course, through the economy comes enormous power. That defeat meant that we first lost free and open access to the internet (yes, dialup used to be free) and then free hosting of our content (no, Google and Facebook are not free; you are the product). This marked a major change in the way content is treated online.

H0d3r explains this as the shift from a link-based internet to a stream-based internet, a "departure from a books-internet towards a television-internet". I have been warning about this "television-internet" in my talks and conversations for a while, and with Netflix taking the crown off Youtube (and making you pay for it, of course), we can assuredly say that H0d3r is right: television, far from disappearing, is finally being resurrected and is taking over the internet.

The Downloadable internet and open standards

But I would like to add to that: it is not merely that we had "links" before. We had, and still have, open standards. This made the internet "downloadable" (and by extension, uploadable) and decentralized.

(In fact, I still remember my earlier days on the web, when I would actually download images (as in "right-click" and "Save as...", not just have the browser download and display them on the fly). I would download images because they were big! It could take a minute or sometimes more to download an image on older modems. Later, I would do the same with music, downloading WAV files before the rise of the MP3 format, of which I ended up building a significant collection (just fair-use copies from friends and owned CDs, of course), and eventually with video files.)

The downloadable internet is what still allows me to type this article in a text editor, without internet access, while reading H0d3r's blog post on my e-reader, because I downloaded his article off an RSS feed. It is what makes it possible for anyone to download a full copy of this blog post and its connected web pages as a git repository, and in this way get the full history of modifications on all the pages, but also to edit them offline and push modifications back in.
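
As a sketch of what that means in practice (the repository URL here is hypothetical, just for illustration):

# clone the whole blog, full history included (made-up URL)
git clone https://example.net/blog.git
cd blog
git log index.html          # every recorded modification of one page
# edit offline, then push the modifications back
git commit -am "fix a typo" && git push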

Wikipedia is downloadable (there are even offline apps for your phone). Open standards like RSS feeds and HTML are downloadable. Heck, even the Internet Archive is downloadable (and I mean, all of it, not just the parts you want), surprisingly enough.

The app-based internet and proprietary software

App-based websites like Google Plus and Facebook are not really downloadable. They are meant to be browsed through an app, so what you actually see through your web browser is really more an application, downloaded software, than a downloaded piece of content. If you turn off Javascript, you will see that visiting Facebook actually shows no content: everything is downloaded on the fly by an application itself downloaded, on the fly, by your browser. In a way, your browser has become an operating system that runs proprietary, untrusted and unverified applications from the web.

(The software is generally completely proprietary, except some frameworks that are published as free software in what looks like the lenient act of a godly king, but is actually more an economic decision of a clever corporation which outsources, for free, R&D and testing to the larger free software community. The real "secret sauce" is basically always proprietary, if only so that we don't freak out on stuff like PRISM that reports everything we do to the government.)

Technology is political. This new "app design" is not a simple optimization or a cosmetic accident of a fancy engineer: by moving content through an application, Facebook, Twitter and the like can see exactly what you do on a web page, what you actually read (as opposed to what you click on) and for how long. By adding a proprietary interface between you and the content online, the advertisement-surveillance complex can track every move you make online.

This is a very fine-tuned surveillance system, and because of the App, you cannot escape it. You cannot share the content outside of Facebook, as you can't download it. Or at least, it's not obvious how you can. Projects like youtube-dl are doing an amazing job reverse-engineering what is becoming the proprietary Youtube streaming protocol, which is constantly changing and is not really documented. But it's a hack: a Sisyphean struggle which is bound to fail, and it does fail, all the time, until we figure out how to either turn those corporations into good netizens respecting and contributing to open standards (unlikely) or destroy those corporations (most likely).

You are trapped in their walled garden. No wonder internet.org is Facebook-only: for most people nowadays, the internet is the web, and the web is Facebook, Twitter and Google, or an iPad with a bunch of apps, each its own cute little walled garden, crafted just for you. If you think you like the Internet, you should really reconsider what you are watching, what you are consuming, or rather, how it is consuming you. There are alternatives. Facebook is a tough nut to crack for free software activists because we lack the critical mass. But Facebook is also an addiction for a lot of people, and spending less time on that spying machine could be a great improvement for you, I am sure. For everything else, we have good free software alternatives and open standards; use them.

"Big brother ain't watching you, you're watching him." - CRASS, Nineteen Eighty Bore (audio)

Michal Čihař: Weekly phpMyAdmin contributions

13 January, 2016 - 00:00

Going back to a real weekly report, this time covering the first week of 2016.

The biggest task focused on codebase cleanup. As Microsoft is ending support for old Internet Explorer versions, we've decided to do the same for our next major release. This allowed us to remove some compatibility code and also upgrade jQuery to the 2.x branch, which drops support for older browsers as well.

Continuing the cleanup, I've revisited most of the array-iterating code, removed unneeded reset() calls, and generally cleaned up the related code.

Besides working directly on the code, I've improved our infrastructure a bit as well and we now have developer documentation online at https://develdocs.phpmyadmin.net/. It is generated using phpdox, but suggestions to improve it are welcome.

All handled issues:


Gunnar Wolf: Readying up for the second "Embedded Linux" diploma course

12 January, 2016 - 23:28

I am happy to share here a project I was a part of during the last year, which ended up being a complete success and now stands to be repeated: the diploma course on embedded Linux, taught at Facultad de Ingeniería, UNAM, where I teach my regular classes as well.

Back in November, we held the graduation for our first 10 students. This photo shows only seven, as the remaining three have already relocated to Guadalajara, where they were hired by Continental, a company that promoted the creation of this specialization program.

After this first exercise, we went over the program and made some adjustments; future generations will have a shorter and more focused program (240 instead of 288 hours, leaving out several topics that were either not deemed related or already thoroughly understood by students to begin with). We intend to start the semester-long course in early February. I will update here with the full program and promotional material as soon as I receive it.

I am especially glad that this course is taught by people I admire and recognize, a very interesting mix of long-time academics and my free-software-related friends: from the academic side, Facultad de Ingeniería's professors Laura Sandoval, Karen Sáenz and Oscar Valdez, and from the free-software side, Sandino Araico, Iván Chavero, César Yáñez and Gabriel Saldaña (and myself in both camps, of course ☺)


Bits from Debian: New Debian Developers and Maintainers (November and December 2015)

12 January, 2016 - 18:30

The following contributors got their Debian Developer accounts in the last two months:

  • Stein Magnus Jodal (jodal)
  • Prach Pongpanich (prach)
  • Markus Koschany (apo)
  • Bernhard Schmidt (berni)
  • Uwe Kleine-König (ukleinek)
  • Timo Weingärtner (tiwe)
  • Sebastian Andrzej Siewior (bigeasy)
  • Mattia Rizzolo (mattia)
  • Alexandre Viau (aviau)
  • Lev Lamberov (dogsleg)
  • Adam Borowski (kilobyte)
  • Chris Boot (bootc)

The following contributors were added as Debian Maintainers in the last two months:

  • Alf Gaida
  • Andrew Ayer
  • Marcio de Souza Oliveira
  • Alexandre Detiste
  • Dave Hibberd
  • Andreas Boll
  • Punit Agrawal
  • Edward Betts
  • Shih-Yuan Lee
  • Ivan Udovichenko
  • Andrew Kelley
  • Benda Xu
  • Russell Sim
  • Paulo Roberto Alves de Oliveira
  • Marc Fournier
  • Scott Talbert
  • Sergio Durigan Junior
  • Guillaume Turri
  • Michael Lustfield

Congratulations!

Russell Coker: Sociological Images 2015

12 January, 2016 - 18:13

The above sign was at the Melbourne Docks in December 2014, when I was returning from a cruise. I have no idea why there are 3 men and 1 woman on the sign (a dock worker was also surprised when I explained why I was photographing it). I wonder whether a sign with 3 women and 1 man would ever have been installed, or would have gone unnoticed if it had been.

At the start of the first day of LCA 2015, the above was displayed at the keynote as a flow-chart for deciding whether someone should ask a question at a lecture. Given that the first real item in the list is that a question should fit in a tweet, I think it was inspired by my blog post about the length of conference questions [1].

At the introduction to the Astronomy Miniconf the above slide was displayed. In addition to referencing the flow-chart for asking questions it recommends dimming laptop screens (among other things).

The above sign was at a restaurant in Auckland in January 2015. I thought that sort of sexist “joke” went out of fashion a few decades ago.

The above photo is from a Melbourne department store in February 2015. Why gender a Nerf gun? That just doesn't make sense. Also, it appeared that the only Nerf crossbow was the purple/pink one; is a crossbow considered feminine nowadays?

The above picture is a screen-shot of one of the “Talking Angela” series of Android games from March. Appropriating the traditional clothing of marginalised groups is a bad thing. People of Native American heritage who want to wear their traditional clothing face discrimination when they do so; when white people play dress-up in clothing that is a parody of Native American style, it's really offensive. The site Racialicious.com has a tag for articles about appropriation [2].

The above was in a library, advertising an ebook reader. In this case they didn't even have pointlessly gendered products; they just had pointlessly gendered adverts for the same product. They also perpetuate the myth that only girls read vampire books and only boys read about space. Also, why is the girl lying down to read while the boy is sitting up?

Above is an Advent calendar on sale in a petrol station. Having end of year holiday presents that have nothing to do with religious festivals makes sense. But Advent is a religious observance. I think this would be a better candidate for “war on Christmas” paranoia than a coffee cup of the wrong colour.

The above photo is of boys' and girls' pipette suckers. Pointlessly gendered recreational products like Nerf guns are one thing, but doing it to scientific equipment is a bigger problem. Are scientists going to stop work if they can't find a pipette sucker of the desired gender? Is worrying about this going to distract them from their research (really bad if working with infectious or carcinogenic solutions)? The Integra advertising claims to be doing this to promote breast cancer research, which is also bogus. Here is a Sociological Images article about the problems of using pink to market breast cancer research [3]; the Sociological Images post about pinkwashing (boobies against breast cancer) is also worth reading [4].

As an aside, I made a mistake in putting a pipette sucker over the woman's chest in that picture. The way that Integra portrayed her chest is relevant to analysis of this advert, but unfortunately I didn't photograph it.

Here is a link to my sociological images post from 2014 [5].


Zlatan Todorić: Interesting? :)

12 January, 2016 - 09:13

On DistroWatch, Debian has more points than Ubuntu and Red Hat combined - coincidence? I don't think so! ;)

Norbert Preining: 10 years TeX Live in Debian

12 January, 2016 - 06:43

I recently dug through my history of involvement with TeX (Live) and found out that January brings a lot of “anniversaries” I should celebrate: 14 years ago I started building binaries for TeX Live, 11 years ago I proposed packaging TeX Live for Debian, and 10 years ago the TeX Live packages entered Debian. There are other things to celebrate next year (2017), namely the 10-year anniversary of the (not so new anymore) infrastructure – in short, tlmgr – of TeX Live packaging, but that will come later. In this blog post I want to concentrate on my involvement in TeX Live and Debian.

Those of you not interested in a boring and melancholic look back into history can safely skip this one. For those a bit interested in the history of TeX in Debian, please read on.

Debian releases and TeX systems

The TeX system of choice was for many years teTeX, curated by Thomas Esser. Digging through the Debian archive and combining it with changelog entries as well as personal experience since I joined Debian, here is a timeline of TeX in Debian, to the best of my knowledge.

Date     Version  Name     teTeX/TeX Live            Maintainers
1993-96  <1       ?        ?                         Christoph Martin
6/1996   1.1      Buzz     ?
12/1996  1.2      Rex      ?
6/1997   1.3      Bo       teTeX 0.4
7/1998   2.0      Hamm     teTeX 0.9
3/1999   2.1      Slink    teTeX 0.9.9N
8/2000   2.2      Potato   teTeX 1.0
7/2002   3.0      Woody    teTeX 1.0
6/2005   3.1      Sarge    teTeX 2.0                 Atsuhito Kohda
4/2007   4.0      Etch     teTeX 3.0, TeX Live 2005  Frank Küster, NP
2/2009   5.0      Lenny    TeX Live 2007             NP
2/2011   6.0      Squeeze  TeX Live 2009
5/2013   7.0      Wheezy   TeX Live 2012
4/2015   8.0      Jessie   TeX Live 2014
???      ???      Stretch  TeX Live ≥2015

The history of TeX in Debian is thus split more or less into 10 years of teTeX and 10 years of TeX Live. While I cannot check back to the origins, my guess is that already the very first releases included (te)TeX. The first release I can confirm (via the Debian archive) shipping teTeX is Bo (June 1997). Maintainership during the first 10 years showed some fluctuation: the first years/releases (until about 2002) were dominated by Christoph Martin, with Adrian Bunk and a few others, who did most of the packaging work on teTeX version 1. After that, Atsuhito Kohda, with help from Hilmar Preusse and other people, brought teTeX up to version 2, and from 2004 to 2007 Frank Küster, again with help from Hilmar Preusse and others, took over most of the work on teTeX. Other names appearing throughout the changelog are (incomplete list) Julian Gilbey, Ralf Stubner, LaMont Jones, and C.M. Connelly (and many more bug reporters and fixers).

Looking at the above table, I have to mention the incredible amount of work that both Atsuhito Kohda and Frank Küster put into the teTeX packages; many of their contributions were carried over into the TeX Live packages. While there weren't many releases during their maintainership, their work inspired and supported the packaging of TeX Live to a huge extent.

Start of TeX Live

I got involved in TeX Live back in 2002, when I started building binaries for the alpha-linux architecture. I can't remember when I first had the idea to package TeX Live for Debian, but here is a timeline from my first email to the Debian Developers mailing list concerning TeX Live to the first accepted upload:

  • 2005-01-11 – binaries for different architectures in debian packages: the first question concerning packaging TeX Live, about including pre-built binaries.
  • 2005-01-25 – Debian-TeXlive Proposal II: a better proposal, but still including pre-built binaries.
  • 2005-05-17 – Proposal for a tex-base package: proposal for tex-base, later tex-common, as the basis for both teTeX and TeX Live packages.
  • 2005-06-10 – Bug#312897: ITP: texlive: the ITP bug for TeX Live.
  • 2005-09-17 – Re: Take over of texinfo/info packages: taking over texinfo, which was somehow orphaned, started here.
  • 2005-11-28 – Re: texlive-basic_2005-1_i386.changes REJECTED: my answer to the rejection by ftp-master of the first upload. This email sparked a long discussion about packaging and helped improve the naming of packages (but not really the packaging itself).
  • 2006-01-12 – Upload of TeX Live 2005-1 to Debian: the first successful upload.
  • 2006-01-22 – Accepted texlive-base 2005-1 (source all): TeX Live packages accepted into Debian/experimental.

One can see from the first emails that at that time I didn't have any idea about Debian packaging and proposed to ship the binaries built within the TeX Live system on Debian. What followed was a long discussion about whether there was any need for yet another TeX system. The then maintainer Frank Küster took a clear stance in favor of including TeX Live, and after several rounds of proposals, tests, rejections and improvements, the first successful upload of TeX Live packages to Debian/experimental happened on 12 January 2006, exactly 10 years ago.

Packaging

Right from the beginning I used a meta-packaging approach. That is, instead of working directly with the source packages, I wrote (Perl) scripts that generated the source packages from a set of directives. There were several reasons why I chose to introduce this extra layer:

  • The original format of the TeX Live packaging information (tpm) was XML files, parsed with an XML parser (libxml). I guess (from what I have seen over the years) that I was the only one ever properly parsing these .tpm files for packaging.
  • TeX Live packages were often reshuffled and Debian package names changed, which would have caused a certain level of pain for the creation of original tar files and packaging.
  • Flexibility in creating additional packages and arbitrary dependencies.

Even now I am not 100% sure whether it was the best idea, but the scripts remain in place, only adapted to the new packaging paradigm in TeX Live (without XML) and extended with new functionality. This allows me to kick off one script that does all the work, including building the .orig.tar.gz, the source packages, and the binary packages.

For those interested in following the frantic activity during the first years, there is a file CHANGES.packaging which, for the years from 2005 to 2011, documents very extensively the changes I made. I don’t want to count the hours that went into all this.

Development over the years

TeX Live 2005 was just another TeX system in Debian Etch, not the preferred one. But then in May 2006, Thomas Esser announced the end of development of teTeX, which cleared the path for TeX Live as the main TeX system in Debian (and the world!). The next release of Debian, Lenny (1/2009), already carried only TeX Live. Unfortunately it was only TeX Live 2007 and not 2008, mostly because I was involved in rewriting the upstream infrastructure to be based on Debian package files instead of the notorious XML files. This took quite a lot of attention and time away from Debian towards upstream development, but that will be discussed in a different post.

Similarly, the release of TeX Live included in Debian Squeeze (released 2/2011) was only TeX Live 2009 (instead of 2010), but since then (Wheezy and Jessie) the releases of TeX Live in Debian have always been the latest released ones.

Current status

Since about 2013 I have been trying to keep a regular schedule of new TeX Live packages every month. This helps me keep up with the changes in upstream packaging and reduces the load of packaging a new release of TeX Live. It also brings users of unstable and testing a very up-to-date TeX system, where packages lag at most one month behind the TeX Live net updates.

Future

As most of the readers here know, besides caring for TeX (Live) and related packages in Debian, I am also responsible for the TeX Live Manager (tlmgr) and most of upstream’s infrastructure, including network distribution. Thus, my (spare, outside work) time needs to be distributed between all these projects (and some others), which leaves less and less time for Debian packaging. Fortunately the packaging is in a state where making regular updates once a month is not much of a burden, since most steps are automated. What is still a bit of a struggle is adapting the binary package (src:texlive-bin) to new releases. But this too has become simpler due to less invasive changes over the years.

All in all, I don’t have many plans for TeX Live in Debian besides keeping the current system running as it is.

Search for, and advice to, future maintainers and collaborators

I would be more than happy if new collaborators appeared, with fresh ideas and some spare time. Unfortunately, my experience over these 10 years with people showing up and proposing changes (does anyone remember the guy proposing a complete rewrite in ML or so?) is that nobody really wants to invest time and energy; everyone searches for quick solutions. That is not something that will work with a package like TeX Live, several gigabytes in size (the biggest in the Debian archive) and with complicated inner workings.

I advise everyone interested in helping to package TeX Live for Debian to first install plain TeX Live from TUG and get used to what actions happen during updates (format rebuilds, hyphenation pattern updates, map file updates). One does not need a perfect understanding of what exactly happens down there in the guts (I didn’t have one in the beginning, either), but if you want to help with packaging and have never heard of format dumps or map files, that might be a slight obstacle.
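To make this concrete, here is a minimal sketch of such an update session using the standard TeX Live tools (the commands exist in any recent TeX Live; the output naturally varies):

    # see which packages an update would touch, without changing anything
    tlmgr update --list

    # update the package manager itself, then everything else, and watch
    # the triggered actions: updmap (map files) and fmtutil (format dumps)
    tlmgr update --self --all

    # the same maintenance actions can also be run by hand, system-wide
    updmap-sys          # regenerate the font map files
    fmtutil-sys --all   # rebuild all format dumps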

Conclusion

TeX Live is the only TeX system in wide use across lots of architectures and operating systems; the only comparable system, MiKTeX, is Windows-specific (although there are traces of ports to Unix). Backed by all the big TeX user groups, TeX Live will remain the prime choice for the foreseeable future, and thus so will TeX Live in Debian.

Carl Chenet: Extend your Twitter network with Retweet

12 January, 2016 - 06:00

Retweet is a self-hosted app written in Python 3 that retweets all the statuses from a given Twitter account to another one. Lots of filters can be used to retweet only tweets matching given criteria.

Retweet 0.8 is available on the PyPI repository and is already in the official Debian unstable repository.

Retweet is already in production for Le Journal Du hacker, a French FOSS community website to share and relay news, and LinuxJobs.fr, a job board for the French-speaking FOSS community.

The new features of 0.8 allow Retweet to manage tweets depending on how old they are, retweeting only if (as sketched below):

  • they are older than a user-specified duration with the parameter older_than
  • they are younger than a user-specified duration with the parameter younger_than
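To give an idea of how this looks in practice, here is a purely illustrative configuration excerpt. The parameter names older_than and younger_than come from this release, but the section name and the unit of the values are my assumptions, so check the official documentation for the actual syntax:

    # hypothetical excerpt of a Retweet configuration file
    [retweet]
    older_than: 30      ; only retweet tweets older than 30 minutes (unit assumed)
    younger_than: 1440  ; and younger than one day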

Retweet is extensively documented; have a look at the official documentation to understand how to install it, configure it and use it.

What about you? Does Retweet help you grow your Twitter account? Leave your comments on this article.


Scott Kitterman: Python3.5 is default python3 in sid

12 January, 2016 - 05:40

As of today, python3 -> python3.5.  There’s a bit of a transition, but fortunately, because most extensions are packaged to build for all supported python3 versions, we started this transition at about 80% done.  Thank you to the maintainers who have done that.  It makes these transitions much smoother.

As part of getting ready for this transition, I reviewed all the packages that needed to be rebuilt for this stage of the transition to python3.5 and a few common errors stood out:

  1. For python3 it’s ${python3:Depends}, not ${python:Depends} (see the sketch after this list).
  2. Do not use ${python3:Provides}.  This has never been used for python3 (go read the policy if you doubt me [1]).
  3. Almost for sure do not use ${python:Provides}.  The only time it should still be used is if some package depends on python2.7-$PACKAGE.  It would surprise me if any of these are left in the archive.  If so, since python2.7 is the last python2, they should be adjusted.  Work with the maintainer of such an rdepend, and once it’s removed, drop the provides.
  4. Do not use XB-Python-Version.  We no longer use this to manage transitions (there won’t be any more python transitions).
  5. Do not use XB-Python3-Version.  This was never used.
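To illustrate points 1 and 2, a binary package stanza for a typical python3 extension needs nothing more than the python3 substvar (a hypothetical package name, sketched only to show the substvar usage):

    Package: python3-example
    Architecture: any
    Depends: ${python3:Depends}, ${shlibs:Depends}, ${misc:Depends}
    Description: example extension for Python 3
     dh-python fills in ${python3:Depends} at build time; no Provides,
     XB-Python-Version or XB-Python3-Version fields are needed.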

Now that we have robust transition trackers [2], the purpose for which XB-Python-Version existed is gone.

In other news, pysupport was recently removed from the archive.  This means that, following the previous removal of pycentral, we finally have one and only one python packaging helper (dh-python) that supports both python and python3.  Thanks to everyone who made that possible.


[1] https://www.debian.org/doc/packaging-manuals/python-policy/

[2] https://release.debian.org/transitions/html/python3.5.html

Sven Hoexter: grep | wc -l

12 January, 2016 - 04:53

I did some musings on my way home about a line of shell scripting similar to

if [ `grep foobar somefile |  wc -l` -gt 0 ]; then ...

Yes, it's obvious that silencing grep and working with the return code is way more elegant, and that backticks are deprecated, or at least discouraged, nowadays. For this special case "grep -c" is not the right replacement either. Just in case.
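Spelled out, the elegant variants look like this ("foobar" and "somefile" being placeholders, of course):

    # use the exit status of a silenced grep instead of counting lines
    if grep -q foobar somefile; then
        ...
    fi

    # when the number of matching lines is actually needed, -c counts them
    matches=$(grep -c foobar somefile)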

So I wanted to know how widespread the "grep | wc -l" chaining actually is. codesearch.d.n to the rescue! At least in some codebases it seems to be rather widespread, so maybe "grep -c" is not POSIX compliant? Nope. Traveling back a few years, a somewhat older manpage also lists a "-c" option, so at least for now I doubt that this is some kind of backwards compatibility thing. Even busybox supports it.

As you can obviously deduce from the matching lines, and my rather fuzzy search pattern, there are valid cases among the results where "grep" is just the first command and some "awk/sed/tr" (you name it) sits between it and the final "wc -l". But quite a few of those "| wc -l" invocations could be replaced by a "-c" added to the "grep" invocation.

Vincent Fourmond: Ghost in the machine: faint remanence of screen contents across reboots in a Macbook pro retina

12 January, 2016 - 04:31
As was noted a few times before, I happen to own a Macbook Pro Retina laptop that I mostly use under Linux. I had noticed from time to time weird mixes between two screens, i.e. I would be looking at a website but, in some areas with uniform colors, I would see faint traces of other windows currently open on another screen. These faint traces would not show up in a screenshot. It never really bothered me, and I attributed it to a weird specificity of the Mac hardware (they often do that) that was not well handled by the nouveau driver, so I had simply dismissed it. Until, one day, I switched off the computer, switched it back on, booted to MacOS and saw this as the boot screen:
Here is a close-up view of the top-left corner of the screen:
If you look carefully, you can still see the contents of the page I was browsing just before switching off the computer! So this problem is not Linux-specific; it also affects MacOS... To be honest, I don't have a clue what's happening here, but it has to be a serious hardware malfunction. How can two video memory regions be composed on display without the computer explicitly asking for it? Why does the problem survive a reboot? I mean, someone switches on my computer and can see the last thing I did on it? I could actually read the address line without difficulty, although you'll have to take my word for it, since the picture does not show it that well. That's scary...

Thadeu Lima de Souza Cascardo: GNU on Smartphones (part II)

11 January, 2016 - 21:58

Some time ago, I wrote how to get GNU on a smartphone. This is going to be a long discussion on why and how we should work on more operating systems for more devices.

On Android

So, why should we bother if we already have Android, some might ask? If it's just because of some binary blobs, one could just use Replicant, right?

Well, one of the problems is that Android development is done in hiding and pushed downstream when a new version is launched. There is no community behind it that anyone can join. Replicant ends up either following it or staying behind. It could fork and have its own innovations, and I am all for it. But the lack of manpower for supporting devices and keeping up with the latest versions and security fixes already takes most of the time of the one or two developers involved.

Also, Android has a huge modularity problem, which I will discuss further below. It's hard to replace many components in the system unless you replace them all. And that also causes the problem that applications can hardly share extra components.

I would rather see Debian running on my devices and supporting good environments and frameworks for all kinds of devices, like phones, tablets, TVs, cars, etc. It's developed by a community I can join, it allows a diverse set of frameworks and environments, and it's much easier to replace single components on such a system.

On Copyleft Hardware

I get it. Hardware is not free or libre. Its design is. I love the concept of copyleft hardware, where one applies the copyleft principles to a hardware design. Also, there is the concept of Hardware that Respects Your Freedom, that is, one that allows you to use it with only free software.

Note that RYF hardware is not necessarily copyleft hardware. In fact, most of the time, the original vendor has not helped at all, and it required reverse engineering efforts by other people to be able to run free software on those systems.

My point here is that we should not prevent people from running free software on hardware that is not RYF or not copyleft. We should continue the efforts of reverse engineering and of pushing hardware vendors not only to ship free software for their hardware, but also to release their designs under free licenses. But in the meantime, we need to make free software flourish on the plethora of devices in the hands of so many people around the world.

On Diversity

I won't go into details in this post about two things. One topic I love is retrocomputing: how Linux supported so many devices, and how many free or quasi-free operating systems ran on so many of them in the past. I could mention uClinux, Familiar, PDAs, palmtops, Motorolas, etc. But I will leave that for another time and go from Openmoko and Maemo forward.

The other topic is application scalability. Even Debian, with so much software available, does not ship all the free software there is. And it doesn't support all third-party services out there. How can we solve that? It has to do with platforms, protocols, open protocols, etc. I will not go into that today.

Because I believe that, either way, it's healthy for our society to have diversity. I believe we should have other operating systems available for our devices, even if application developers will not develop for all of them. That is already the case today. There are other ways to fix that, when it needs fixing. Sometimes it's sufficient that you can have your own operating system on your device, that you can customize it, enhance it and share it with friends.

And it would also allow for innovation. It would make it possible for some other operating system to gain enough market share in some niche, other than Android and iOS, for example. But that requires that we can support that operating system on different devices.

And that is the scalability that I want to talk about. How to support more devices with less effort.

Options

But before I go into that, let me write more about the alternatives we have out there. And some of the history around it.

Openmoko

Openmoko developed a set of devices with a partly free design, and several free operating systems run on top of them. The community developed a lot of its own as well. Debian could (and still can) run on it. There is SHR, which uses FSO, a mobile middleware based on D-Bus.

It even spawned the development of other devices, like the GTA04, a successor board that can be shipped inside a Neo Freerunner case.

Maemo, Moblin, Meego and Tizen

I remember the announcement of the Nokia N770. During FISL in 2005, I even criticized it a lot, because it shipped with proprietary drivers and applications. But it was the first time we heard of Maemo. It was based on Debian and GNOME. The GNOME Mobile initiative was born from that, I believe, but died later on.

But with the launch of the N900, and later events, like the merger of Moblin with Maemo to create Meego, we all had an operating system that was based on community-developed components, that had some community participation, and that was much more like the systems we were used to. You could run gcc on the N900. You could install Das U-Boot and have other operating systems running there.

But Nokia went down a different path, and Intel started Tizen with Samsung. There is so much legacy there that could still be developed upon. I am just sad that Jolla decided to put proprietary layers on top of that to create SailfishOS.

But we still have Mer and Nemo. It looks like Cordia is gone, though; at least http://cordiahd.org seems to have been registered by a domain seller.

Not to forget Neo900, a project to upgrade the N900's board, in the same vein as the GTA04.

FirefoxOS and Ubuntu

What can I say about FirefoxOS and Ubuntu Phone? In summary, I think we need more than the Web, and Canonical has a history of not being as community-oriented as we'd like.

I won't go too much here into what I think about the Web as a platform, but I think we need a system that has more options for platforms. I haven't participated in projects directed by Mozilla either, so I can't say much about that.

Ubuntu Phone should be a system more like what we are used to. But Canonical is going to set the direction of the project; it's not a community project.

Nonetheless, I think they add to the diversity, and users should be able to try them, or their free parts or versions. But there are challenges there that I think need to be discussed.

So, both of these systems are based on Android. They don't use Dalvik or the application framework, but they use what is below that, which means device drivers in userspace, like RIL for the modem drivers and EGL for the graphics drivers. The reason they do that is to build on top of all the existing vendor support for Android: if an SoC and phone vendor already supports Android, there should not be much needed to support FirefoxOS or Ubuntu Phone.

In practice, this has not benefited the users or the projects. Porting should be as simple as getting a generic FirefoxOS or Ubuntu Phone image and running it on top of your Android or CyanogenMod port.

Porting usually requires doing all the same work as porting Android. Even though one should be able to take advantage of existing ports, it still requires a lot of integration work and building an entire image, most of the time including the upper layers, which should be equal on all devices. This process should require at most creating a new image based on other images and loading it onto the device. I will discuss this more below.

I can't forget to mention the Matchstick TV project. It is based on FirefoxOS. I think it would have much better chances to succeed if it were easier for testers to have images available for all of their gadgets capable of running some form of Android.

Replicant

And then we have Replicant. It has a lot of the problems Android has. Even so, it's a tremendously important project. First, it offers a free version of Android, removing all proprietary pieces. This is very good for the community. But more than that, it tries to free the devices beyond that.

What the project has done is reverse engineer some of the pieces, mostly the modem drivers. That allows other projects to build upon that work and support those modems. Without that, there is no telephony or cellular data available on any of these devices. Not without proprietary software, that is.

They have also been working on free bootloaders, which is another important step towards a completely free system.

Next steps

There are many challenges here, but I believe we should work on a set of steps that make the problem more palatable and produce intermediate results that can be used by users and other projects.

One of the goals should be good modularity: the ability to replace some pieces with the least trouble necessary. The update of a single framework library should be just a package install, instead of building the entire framework again. If there is ABI breakage, users should still have access to binaries (and accompanying source code) and only need to update the library and its reverse dependencies. If one layer does not have these properties, at least it should be possible to update this big chunk without interfering with the other layers.

One example is FirefoxOS and Ubuntu Phone. Even if there is a big image with all the system userspace and middleware, the porting, install and upgrade process should allow the user to keep the applications and to leverage porting already done by similar projects. Heck, these two projects should be able to share porting efforts without any trouble.

So what follows is a quick discussion on Android builds, then suggestions on how to approach some of the challenges.

On Android build

The big problem with the Android build is its non-modularity. You need to build the world and more to get a single image in the end, instead of packages that could be built independently, generating modular package images that could then be bundled into a single system image. Better yet would be the ability to pick only those components that matter.

Certainly, portability would be just as interesting: being able to build certain components on top of a GNU system with GNU libc.

At times, it seems like this is done by design, to hinder "fragmentation" and competition. Basically, making it difficult to exercise the freedom to modify the software and share your modifications with others.

Imagine the possibilities of being able to:

  • Build Dalvik and the framework on a GNU system in order to run Android programs on GNU systems.
  • Or build a single framework library that would be useful for your Java project.
  • Build only cutils, because you want to use that on your project.
  • Build the HAL and be able to use Android drivers on your system.
  • Build only RIL and use the modem drivers with ofono.

There are some dangers in promoting the use of Android like this. Since it promotes proprietary drivers, relying on such layers instead of better community-oriented layers means giving an advantage to the former. So the best plan would be to replace those layers, when they are used, with things like ofono and Wayland, for example.

But, in the meantime, making it easier for users to experiment with FirefoxOS, Ubuntu Phone, Matchstick TV or Replicant on other devices, without resorting to building the entire Android system, would be a very nice thing.

It is possible that there are some challenges with respect to API and ABI; that these layers are so intermingled that some changes present in one port would prevent FirefoxOS from running on top of it without changes to either of the layers. I can't confirm that is the case, but I can't deny the possibility.

Rooting

One of the challenges we have that may have trouble in scaling is rooting devices.

Unfortunately, most of the devices are owned by the vendor or carrier, not the user. The user is prevented from replacing software components in the system: permission is removed to replace most of the files in the filesystem, to write to block devices, to change boot parameters, or to write to the storage devices.

Fortunately, there are many documents, guides, howtos and whatnot out there instructing how to root a lot of devices. In some cases, it depends on software bugs that can be patched by the users as instructed by vendors.

Certainly, favoring devices that are built to allow rooting, hacking, etc., is a good thing. But we should still help those users out there who do not have such devices, and allow them to run free software on them.

Booting

Then comes booting, which is a large discussion on its own, and is also related to rooting, or how to allow users to replace their software.

First, we have the topic of bootloaders, which are usually not free and are embedded in the chip, not on external storage. So there are those pieces of the bootloader that we can replace more easily, and those that we cannot, because they would require changing a piece of ROM, for example.

Das U-Boot would be one of the preferred options for a system. It supports a lot of hardware platforms out there, a lot of storage devices and filesystems, network booting, USB and serial booting, and a lot of other things. Porting is not an easy task, but it is possible.
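To make that concrete, here is a sketch of booting a kernel by hand from the U-Boot console. The commands and the kernel_addr_r/fdt_addr_r environment variables are standard U-Boot, but the console device, partition and file names are placeholders for whatever a given board uses:

    # U-Boot console, illustrative values only
    setenv bootargs console=ttyS0,115200 root=/dev/mmcblk0p2 rw
    load mmc 0:1 ${kernel_addr_r} zImage
    load mmc 0:1 ${fdt_addr_r} board.dtb
    bootz ${kernel_addr_r} - ${fdt_addr_r}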

Meanwhile, using the bootloaders that ship with the system, when possible, would allow a path where other pieces of the system would be free.

One of the challenges here is building single images that can boot everywhere. The lack of a single ARM boot environment is a curse and a blessing. It makes it hard to have such a single boot image but, on the other hand, it has allowed so much of the information for these systems to be free, embedded in copyleft software, instead of having blobs executed by our free systems, as ACPI encourages.

Device trees have pushed this portability issue from the kernel to the bootloaders, possibly encouraging vendors to now hide and copyright this information in the form of proprietary device trees. But it has made single images easier.

In practice, we still need to see a lot of devices out there supporting this single-kernel scenario. And this mechanism only brings us the possibility of a single kernel; we still need to ship bootable images that contain pieces of bootloaders that are not as portable.

This has caused lots of operating systems out there to be built for a single board, or to support just a small set of boards. I struggle with the concept. Why are we not able to mix device-specific pieces with generic pieces and get a resulting image that will boot on our board of choice? Why does every project need to do all the porting work again, repeating the effort? Or ship one entire ISO image for every supported board? Check how Geexbox does it, for an example.

Fortunately, I see Debian going in a good direction. Here, one can see how it instructs the user to combine a system-dependent part with a system-independent part.
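As a rough sketch of that direction, the armhf SD card images of the Debian installer are assembled from a board-specific part and a generic part along these lines (the machine name and the SD card device are placeholders):

    # combine the board-specific firmware part with the generic part
    zcat firmware.<machine>.img.gz partition.img.gz > complete_image.img
    # then write the combined image to the SD card
    dd if=complete_image.img of=/dev/sdX bs=4M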

Installation

Which brings us to the installation challenge. We should make this easy and also customizable by the user. Every project might have its own method, and that is part of the diversity that we should allow.

The great challenge here is handling rooting and booting for the user. But it's also possible to leave those as separate efforts, as it would be nice to have installers as good as the ones we have in desktop and server environments.

Linux

Linux is one of the most ported kernels out there. Of course, it would be healthy for the diversity I propose that other kernels and systems should work on those mobile devices too. NetBSD, for example, has a reputation of being very portable and has been ported to many platforms. And GNU runs on many of them, either together with the original systems or as the main system on top of other kernels, as proven by the Debian GNU/kFreeBSD port, which uses GNU libc.

But, though I would love to see HURD running directly on those devices, Linux and its derivatives are already running on them. Easier to tackle is to get Linux-libre running on those systems.

But though this looks like one of the easier tasks at hand, it is not. If you run a derivative version of Linux, provided by the vendor, things should go smoothly. But most of the time, the vendor does not send patches upstream and leaves their fork to rot, if you are lucky, in the early 3.x series.

And sometimes there is not even source code available. There are vendors out there who do not respect the GPL, and that is one of the reasons why GPL enforcement is important. I take this opportunity to call attention to the work of Software Freedom Conservancy and its current efforts to raise funds to continue that work.

Running an existing binary version of Linux on your device with a free userspace is part of the strategy of replacing pieces one at a time while allowing for good diversity and reaching more users.

Drivers

Then we have the matter of drivers, and in this case not only Linux drivers, but drivers running in userspace. Though others exist, there are two important and common cases: modem and graphics drivers, both usually provided by Android frameworks, which other systems try to leverage instead of using alternatives that are more community-friendly.

In the case of modem drivers, Android uses its RIL. Besides not using a radio or modem at all, there are two strategies for free versions. One is using free versions of the drivers for RIL; that's what Replicant does because, well, it uses the other Android subsystems after all. The other is using a system developed by the community. ofono is one that, even though it's an Intel initiative, looks more community-governed, or at least more open to community participation, than any Android subsystem ever was.

As for graphics, Canonical even built its Mir project with the goal of being able to use Android GPU drivers. Luckily, there have been a lot of reverse engineering efforts out there for many of the GPUs shipped with ARM SoCs.

Then we can use Mesa, Wayland and X.org. Another option, until then, is just using a framebuffer driver, possibly one that does not need any initialization, relying on the one done by the bootloader, for example.

Middleware and Shells

Plasma Active, which I just found out is now Plasma Mobile, looks like a great system. Built by the folks behind KDE, we can expect great freedom to come from it. Unfortunately, it suffers from what we have been discussing here: a lack of good device support, or shipping images that run on a single device only, without leveraging the porting done before by other systems. Fortunately, that is just what I want to address with this effort.

FSO, which I mentioned before, is a good middleware that we should try to run on these devices. Running the GNOME or Unity shell, and using rpm- or deb-based systems: that's all part of the plan for diversity. You could use systemd or System V init, whatever gives you the kicks.

It's not an easy task, since there are so many new things on these new kinds of devices. Besides touch, there is telephony, which, as I mentioned, would be a good task for ofono. As for TV sets or dongles, I would love to see OpenFlint, created by the Matchstick TV guys, flourish out there and allow me to flint stuff from my mobile phone running Debian to my TV dongle running Debian.

Project

So, is there a project out there you can start contributing to? Well, I pointed out a lot of them, and all of them are part of this plan: contributing to reverse engineering GPUs, or to Plasma Mobile, ofono, GNOME, Linux-libre, bootloaders, Debian, and so many other projects.

My plan is to contribute within the scope of Debian, make sure it works well on top of semi-liberated devices, and make sure there is a nice user interface and good applications when using either GNOME or Plasma. Integrating ofono is a good next step, but I am getting ahead of myself.

Right now, I don't think there is a need for an integrated effort. If you think otherwise, please let me know. If you are doing something in this direction, I would love to know.

Paul Wise reminded me to point to Debian Mobile, where I will probably contribute any documentation and other relevant results.

Thanks

There are so many people to thank, but I would like to remember Ian Murdock, who created Debian, one of the projects that inspires me the most. I think the best way to handle his passing is to celebrate what he helped create and move forward, taking more free software to more people in an easy and modular way.

I have been wanting to write something like this for a long time, so I thank Chris Webber for inspiring me to do it.

Arturo Borrero González: Great Debian meeting!

11 January, 2016 - 17:19


Last week we finally managed to hold a proper informal Debian meeting in Seville.

A total of 9 people attended, 3 of them DDs (Aurelien Jarno, Guillem Jover, Ana Guerrero) and 1 DM (me).

The meeting started with the usual round of personal introductions, and then topics ranged from how to get more people involved with Debian, to discussions of GSoC-like programs, and some Debian anecdotes as well.

There were also talks about when and how future meetings should take place.

This meeting was hosted by http://www.plan4d.eu/, thanks to Pablo Neira (Netfilter project head).

Some pics of the moment:



Alessio Treglia: Filling old bottles with new wine

11 January, 2016 - 15:29


“They are filling old bottles with new wine!” This is what the physicist Werner Heisenberg heard exclaimed by his friend and colleague Wolfgang Pauli who, criticizing the approach of the scientists of the time, believed that they had forcibly glued the notion of the “quantum” onto the old theory of Bohr’s planetary model of the atom. Faced with the huge questions introduced by quantum physics, Pauli instead began to observe the new findings from a different point of view, from a new level of reality, without the constraints imposed by previous theories.

Newton himself, once he had theorized the law of the gravitational field, failing to place it in any of the physical realities of the time, merely…

<Read More...>

Daniel Pocock: FOSDEM RTC Dev-room schedule published

11 January, 2016 - 13:09

If you want to help make Free Real-time Communication (RTC) with free, open source software surpass proprietary solutions this year, a great place to start is the FOSDEM RTC dev-room.

On Friday we published the list of 17 talks accepted for the dev-room (times are still provisional until the FOSDEM schedule is printed). They cover a range of topics, including SIP, XMPP, WebRTC and peer-to-peer real-time communication.

RTC will be very prominent at FOSDEM this year with several talks on this topic, including my own, in the main track.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.