Planet Debian

Planet Debian - http://planet.debian.org/

Erich Schubert: Year 2014 in Review as Seen by a Trend Detection System

17 January, 2015 - 00:22
We ran our trend detection tool Signi-Trend (published at KDD 2014) on news articles collected for the year 2014. We removed the category of financial news, which is overrepresented in the data set. Below are the (described) results from the top 50 trends (I will push the raw results to appspot if possible, given file limits). The top 10 trends are highlighted in bold.

January
2014-01-29: Obama's State of the Union address

February
2014-02-05..23: Sochi Olympics (11x, including the four below)
2014-02-07: Gay rights protesters arrested at Sochi Olympics
2014-02-08: Sochi Olympics begin
2014-02-16: Injuries in Sochi Extreme Park
2014-02-17: Men's snowboard cross finals called off because of fog
2014-02-19: Violence in Ukraine and Kiev
2014-02-22: Yanukovich leaves Kiev
2014-02-23: Sochi Olympics close
2014-02-28: Crimea crisis begins

March
2014-03-01..06: Crimea crisis escalates further (3x)
2014-03-08: Malaysia Airlines plane missing in South China Sea (2x)
2014-03-18: Crimea now considered part of Russia by Putin
2014-03-28: U.N. condemns Crimea's secession

April
2014-04-17..18: Russia-Ukraine crisis continues (3x)
2014-04-20: South Korea ferry accident

May
2014-05-18: Cannes film festival
2014-05-25: EU elections

June
2014-06-13: Islamic State fighting in Iraq
2014-06-16: U.S. talks to Iran about Iraq

July
2014-07-17..19: Malaysian airliner shot down over Ukraine (3x)
2014-07-20: Israel shelling Gaza kills 40+ in a day

August
2014-08-07: Russia bans EU food imports
2014-08-20: Obama orders U.S. air strikes in Iraq against IS
2014-08-30: EU increases sanctions against Russia

September
2014-09-04: NATO summit
2014-09-23: Obama orders more U.S. air strikes against IS

October
2014-10-16: Ebola case in Dallas
2014-10-24: Ebola patient in New York is stable

November
2014-11-02: Elections in Romania; U.S. election ramp-up
2014-11-05: U.S. Senate elections
2014-11-25: Ferguson prosecution

December
2014-12-08: IOC Olympics sport additions
2014-12-11: CIA prisoner center in Thailand
2014-12-15: Sydney cafe hostage siege
2014-12-17: U.S. and Cuba relations improve unexpectedly
2014-12-19: North Korea blamed for Sony cyber attack
2014-12-28: AirAsia flight 8501 missing

Richard Hartmann: Release Critical Bug report for Week 03

16 January, 2015 - 23:21

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1100 (Including 178 bugs affecting key packages)
    • Affecting Jessie: 172 (key packages: 104) That's the number we need to get down to zero before the release. They can be split into two big categories:
      • Affecting Jessie and unstable: 128 (key packages: 80) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 19 bugs are tagged 'patch'. (key packages: 10) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 8 bugs are marked as done, but still affect unstable. (key packages: 5) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 101 bugs are neither tagged patch, nor marked done. (key packages: 65) Help make a first step towards resolution!
      • Affecting Jessie only: 44 (key packages: 24) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 18 bugs are in packages that are unblocked by the release team. (key packages: 7)
        • 26 bugs are in packages that are not unblocked. (key packages: 17)

How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy          Jessie
43    284 (213+71)   468 (332+136)   319 (240+79)
44    261 (201+60)   408 (265+143)   274 (224+50)
45    261 (205+56)   425 (291+134)   295 (229+66)
46    271 (200+71)   401 (258+143)   427 (313+114)
47    283 (209+74)   366 (221+145)   342 (260+82)
48    256 (177+79)   378 (230+148)   274 (189+85)
49    256 (180+76)   360 (216+155)   226 (147+79)
50    204 (148+56)   339 (195+144)   ???
51    178 (124+54)   323 (190+133)   189 (134+55)
52    115 (78+37)    289 (190+99)    147 (112+35)
1     93 (60+33)     287 (171+116)   140 (104+36)
2     82 (46+36)     271 (162+109)   157 (124+33)
3     25 (15+10)     249 (165+84)    172 (128+44)
4     14 (8+6)       244 (176+68)
5     2 (0+2)        224 (132+92)
6     release!       212 (129+83)
7     release+1      194 (128+66)
8     release+2      206 (144+62)
9     release+3      174 (105+69)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

EvolvisForge blog: Debian/m68k hacking weekend commencing soonish

16 January, 2015 - 21:26

As I said, I did not let certain events that begin with “lea” and end with “ing” prevent me from organising a Debian/m68k hack weekend. Well, that weekend is now.

I’m too disorganised, and I spent so much time in the last few evenings organising things that I have already built up a sleep deficit ☹ and the feedback was slow. (But so are the computers.) And someone I’d have loved to come was hurt and can’t come.

On the plus side, several people I’ve long wanted to meet IRL are coming, either already today or tomorrow. I hope we all will have a lot of fun.

Legal disclaimer: “Debian/m68k” is a port of Debian™ to m68k. It used to be official, but now isn’t. It belongs to debian-ports.org, which may run on DSA hardware, but is not acknowledged by Debian at large, unfortunately. Debian is a registered trademark owned by Software in the Public Interest, Inc.

Rhonda D'Vine: Polygons

16 January, 2015 - 20:44

I stumbled upon this site thanks to Helga: Parable of the Polygons. On the site you can interactively find out how harmless choices can make a harmful world. I found it quite eye-opening. What struck me most, though it isn't part of the site, is that only unhappy polygons are willing to move; those who are merely okay with their neighbourhood, but not really happy about it, aren't. That made me try it out in my own way: creating the most diverse environment possible by temporarily making as many polygons as necessary unhappy, to find out whether that maximises the number of happy polygons in the long run.

... which is actually part of the way I see my own life. I have always sort of tried to confront people to make them think. I mean, it's not that common that you see a by-the-looks male person wearing a skirt. And since I moved out in July into a small intermediate flat, and thus a new neighbourhood, I found the confidence (in part also to be attributed to the confidence built up at these fine feminist conferences) to walk my hometown in a skirt. Only on a few occasions, when meeting up with friends, mostly in the evening or at night, but it was always a nice experience. To be honest, I only felt uncomfortable once, when there was a probably right-wing skinhead at the subway station. There were too many other people around, so I tried to avoid eye contact, but it didn't feel good.

Diversity is something that society needs, in all aspects, also within the Debian project. I strongly believe that there can't be much innovation or forward movement if all people think in the same direction; that only means potential alternative paths won't even get considered, and may get lost. That's one of the core parts of what makes the Free Software community lively and useful. People try different approaches, and in the end there will be adopters of what they believe is the better project. Projects pop up every now and then; others starve from loss of interest, users not picking them up, or developers spending their time on other things, and that's absolutely fine too. There is always something to be learned, even from those situations.

Speaking of diversity, there is a protest going on later today because the boss of a cafe here in Vienna considered it a good idea to kick out a lesbian couple because they kissed each other in greeting, told them that there is no place for their "otherness" in her traditional Viennese cafe, and said they should rather take it to a brothel. Yesterday she apologised for the tone she used, saying she should have been more relaxed, as the CEO of that cafe. Which literally means she only apologised for the tone she used in her role, but not at all for the message she conveyed. So meh; I hope there will be many people at the protest. Yes, there is an anti-discrimination law around, but it only covers the workplace, not service areas. Welcome to Austria.
On the upside, a court struck down the ban on adoption by same-sex couples just the other day. Hopefully there is still hope for this country. :)


Steinar H. Gunderson: Locate

16 January, 2015 - 18:54
cassarossa:~> time locate asdkfjhasekjrxhw
locate asdkfjhasekjrxhw  19,49s user 0,46s system 82% cpu 24,071 total

It's 2015. locate still works by a linear scan through a flat file.
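
With mlocate (the common implementation on Debian at the time), you can see the flat database each query scans from start to finish; a quick check, assuming the default database path:

# mlocate keeps one database file and every query scans it end-to-end.
# --statistics prints its size and entry counts:
locate --statistics
ls -lh /var/lib/mlocate/mlocate.db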

Raphaël Hertzog: Freexian’s fifth report about Debian Long Term Support

16 January, 2015 - 16:41

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, 46 work hours were equally split among 4 paid contributors (note that Thorsten and Raphaël actually put in more hours, as they took over some hours that Holger did not use in previous months). Their reports are available:

Evolution of the situation

Compared to last month, the number of paid work hours has barely increased (we are at 48 hours per month). We still have a couple of new sponsors in the pipeline, but with the new year they have not completed the process yet. Hopefully next month will see a noticeable increase.

As usual, we are looking for more sponsors to reach our minimal goal of funding the equivalent of a half-time position. Those of you who are struggling to spend leftover budget in the last quarter: now is a good time to see if you want to include Debian LTS support in your 2015 budget!

In terms of security updates waiting to be handled, the situation looks similar to last month: the dla-needed.txt file lists 30 packages awaiting an update (3 more than last month), and the list of open vulnerabilities in Squeeze shows about 56 affected packages in total. We are not managing to clear the backlog, but it’s not getting significantly worse either.

Thanks to our sponsors


Junichi Uekawa: Opposite to strong typing.

16 January, 2015 - 04:18
Opposite to strong typing. Maybe weak typing is too discriminatory, we should call it typing challenged. Like: sqlite is a typing challenged language.

Daniel Pocock: Disk expansion

16 January, 2015 - 03:29

A persistent problem that I encounter with hard disks is the capacity limit. If only hard disks could expand like the Tardis.

My current setup at home involves an HP Microserver. It has four drive bays carrying two SSDs (for home directories) and two Western Digital RE4 2TB drives for bulk data storage (photos, source tarballs and other things that don't change often). Each pair of drives is mirrored. I chose the RE4 because I use RAID1 and they offer good performance and error recovery control, which is useful in any RAID scenario.

When I put in the 2TB drives, I created a 1TB partition on each for Linux md RAID1 and another 1TB partition on each for BtrFs.
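
A minimal sketch of that layout (device names are examples, not my actual setup):

# Mirror the first 1TB partitions with Linux md:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Use BtrFs's own RAID1 across the second 1TB partitions:
mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3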

Later I added the SSDs and I chose BtrFs again as it had been working well for me.

Where to from here?

Since getting a 36 megapixel DSLR that produces 100MB raw images and 20MB JPEGs I've been filling up that 2TB faster than I could have ever imagined.

I've also noticed that vendors are offering much bigger NAS and archive disks so I'm tempted to upgrade.

First I looked at the Seagate Archive 8TB drives: 2TB bigger than the nearest competition. Discussion on Reddit suggests they don't have Error Recovery Control / TLER, however, and that leaves me feeling they are not the right solution for me.

Then I had a look at the WD Red. Slightly less performant than the RE4 drives I run now, but with the possibility of 6TB per drive, and a little cheaper. Apparently they do have TLER, just like the RE4 and other enterprise drives.
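
For anyone wanting to verify TLER support before buying or deploying, smartmontools can query and set a drive's SCT Error Recovery Control timeouts; a sketch, with the device path as an example:

# Read the current ERC read/write timeouts; drives without ERC
# report the command as unsupported:
smartctl -l scterc /dev/sda

# Set both timeouts to 7 seconds (the unit is tenths of a second):
smartctl -l scterc,70,70 /dev/sda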

Will 6 or 8TB create new problems?

This all leaves me scratching my head and wondering about a couple of things though:

  • Will I run into trouble with the firmware in my HP Microserver if I try to use such a big disk?
  • Should I run the whole thing with BtrFs and how well will it work at this scale?
  • Should I avoid the WD Red and stick with RE4 or similar drives from Seagate or elsewhere?

If anybody can share any feedback it would be really welcome.

Michal Čihař: Weblate UI polishing

16 January, 2015 - 00:00

After releasing Weblate 2.0 with the Bootstrap-based UI, there were still a lot of things to improve. Weblate 2.1 brought more consistency in the use of button colors and icons. Weblate 2.2 will bring improvements to other graphical elements.

One item that has been in our issue tracker for quite a long time is providing our own renderer for the SVG status badge. So far Weblate has offered either a PNG badge or an external SVG rendered by shields.io. Relying on an external service was not good in the long term, and it also caused requests to a third-party server on many pages, which could be considered bad privacy-wise.

Since this week, Weblate can render the SVG badge on its own, and it also matches the current style used by other services (e.g. Travis CI):

One last thing that really did not fit into the new UI was the activity charts. In the past they were rendered as PNG on the server side, but for upcoming releases we have switched to the Chartist JavaScript library and render them as SVG on the client side. This way we can style them nicely to fit the page, they scale properly, and server load is reduced. You can see them in action on the Hosted Weblate server:


Noah Meyerhans: Spamassassin updates

15 January, 2015 - 22:44

If you're running Spamassassin on Debian or Ubuntu, have you enabled automatic rule updates? If not, why not? If possible, you should enable this feature. It should be as simple as setting "CRON=1" in /etc/default/spamassassin. If you choose not to enable this feature, I'd really like to hear why. In particular, I'm thinking about changing the default behavior of the Spamassassin packages such that automatic rule updates are enabled, and I'd like to know if (and why) anybody opposes this.
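
For example, enabling the updates and pulling rules immediately might look like this (a sketch; Debian's default file already contains the CRON variable):

# Enable the nightly sa-update cron job shipped with the package:
sed -i 's/^CRON=0/CRON=1/' /etc/default/spamassassin

# Fetch new rules right away; sa-update exits non-zero if none are available:
sa-update
service spamassassin reload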

Spamassassin hasn't been providing rules as part of the upstream package for some time. In Debian, we include a snapshot of the ruleset from an essentially arbitrary point in time in our packages. We do this so Spamassassin will work "out of the box" on Debian systems. People who install Spamassassin from source must download rules using Spamassassin's updates channel. The typical way to use this service is to use cron or something similar to periodically check for rule changes. This allows the anti-spam community to quickly adapt to changes in spammer tactics, and allows you to actually benefit from their work by taking advantage of their newer, presumably more accurate, rules. It also allows for quick reaction to issues such as the ones described in bugs 738872 and 774768.

If we do change the default, there are a couple of possible approaches we could take. The simplest would be to change the default value of the CRON variable in /etc/default/spamassassin. Perhaps a cleaner approach would be to provide a "spamassassin-autoupdates" package that would simply provide the cron job and a small wrapper program to perform the updates. The Spamassassin package would then specify a Recommends relationship with this package, thus providing the default enabled behavior while still offering a clear and simple mechanism to disable it.

Lunar: 80%

15 January, 2015 - 21:43

Unfortunately I could not go on stage at the 31st Chaos Communication Congress to present reproducible builds in Debian alongside Mike Perry from the Tor Project and Seth Schoen from the Electronic Frontier Foundation. I've tried to make up for it, though… and we have made amazing progress.

Wiki reorganization

What was a massive and frightening wiki page now looks much more welcoming:

Depending on what one is looking for, it should be much easier to find. There's now a high-level status overview given on the landing page, maintainers can learn how to make their packages reproducible, enthusiasts can more easily find what can help the project, and we have even started writing some history.

.buildinfo for all packages

New Year's Eve saw me hacking Perl to write dpkg-genbuildinfo. Similar to dpkg-genchanges, it's run by dpkg-buildpackage to produce .buildinfo control files. This is where the build environment and the hashes of source and binary packages are recorded. This script, integrated with dpkg, replaces the previous debhelper interim solution written by Niko Tyni.
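
As a rough illustration, a hypothetical and heavily abridged .buildinfo might look like this (the exact field set of this experimental format may differ):

Format: 1.0
Source: hello
Version: 2.9-1
Architecture: amd64
Checksums-Sha256:
 <sha256> 1217 hello_2.9-1.dsc
 <sha256> 26036 hello_2.9-1_amd64.deb
Build-Environment:
 base-files (= 8),
 gcc-4.9 (= 4.9.1-19)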

We used to fix mtimes in control.tar and data.tar using a specific addition to debhelper named dh_fixmtimes. To better support the ALWAYS_EXCLUDE environment variable, and for pragmatic reasons, we moved the process into dh_builddeb.

Both changes were quickly pushed to our continuous integration platform. Before, only packages using dh would create a .buildinfo and thus eventually be considered reproducible. With these modifications, many more packages had their chance… and this shows:

Yes, with our experimental toolchain we are now at more than eighty percent! That's more than 17200 source packages!

srebuild

Another big item on the to-do list was crossed off by Johannes Schauer. srebuild is a wrapper around sbuild:

Given a .buildinfo file, it first finds a timestamp of Debian Sid from snapshot.debian.org which contains the requested packages in their exact versions. It then runs sbuild with the right architecture as given by the .buildinfo file and the right base system to upgrade from, as given by the version of the base-files package version in the .buildinfo file. Using two hooks it will install the right package versions and verify that the installed packages are in the right version before the build starts.
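
Rebuilding is then a matter of pointing the wrapper at the control file (hypothetical invocation; everything else is read from the .buildinfo):

srebuild hello_2.9-1_amd64.buildinfo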

Understanding problems

Over 1700 packages have now been reviewed to understand why build results could not be reproduced on our experimental platform. The variations between the two builds are currently limited to time and file ordering, but this has still uncovered many problems. There are still toolchain fixes to be made (more than 180 packages for the PHP registry) which can make many packages reproducible at once, but others, like C pre-processor macros, will require many individual changes.

debbindiff, the main tool used to understand differences, has gained support for .udeb, TrueType and OpenType fonts, and PNG and PDF files. It's less likely to crash on problems with encodings or external tools. But most importantly for large packages, it has been made a lot faster, thanks to Reiner Herrmann and Helmut Grohne. Helmut has also been able to spot cross-compilation issues using debbindiff!

Targeting our efforts

It gives warm fuzzy feelings to hit the 80% mark, but it would be a bit irrelevant if it did not concern packages that matter. Thankfully, Holger worked on producing statistics for more specific package sets. Mattia Rizzolo has also done great work improving the scripts that generate the various pages visible on reproducible.debian.net.

All essential and build-essential packages, except gcc and bash, are considered reproducible or have patches ready. After some lengthy builds, I also managed to come up with a patch to make linux build reproducibly.

Miscellaneous

After my initial attempt to modify r-base to remove a timestamp in R packages, Dirk Eddelbuettel discussed the issue with upstream and came up with a better patch. The latter has already been merged upstream!

Dirk's solution is to allow timestamps to be set using an external environment variable. This is also how I modified FontForge to make it possible to reproduce fonts.

Identifiers generated by xsltproc have also been an issue. After reviewing my initial patch, Andrew Ayer came up with a much nicer solution. Its potential performance implications need to be evaluated before submission, though.

Chris West has been working on packages built with Maven amongst other things.

PDFs generated by GhostScript, another painful source of trouble, are being worked on by Peter De Wachter.

Holger got X.509 certificates signed by the CA cartel for jenkins.debian.net and reproducible.debian.net. No more scary security messages now. Let's hope next year we will be able to get certificates through Let's Encrypt!

Let's make a difference together

As you can imagine with all that happened in the past weeks, the #debian-reproducible IRC channel has been a cool place to hang out. It's very energizing to get together and share contributions, exchange tips and discuss the hardest points. Mandatory quote:

* h01ger is very happy to see again and again how this is a nice
         learning circle...! i've learned a whole lot here too... in
         just 3 months... and its going on...!

Reproducible builds are not going to change anything for most of our users. They simply don't care how they get software on their computer. But they do care about getting the right software without having to worry about it. That's our responsibility as developers. Enabling users to trust their software is important, and a major contribution we, as Debian, can make to the wider free software movement. Once Jessie is released, we should make a collective effort to make reproducible builds a highlight of our next release.

Tim Retout: Docker London Meetup - January 2015

15 January, 2015 - 14:45

Last week, I visited London for the January Docker meetup, which was the first time I'd attended this group.

It was a talk-oriented format, with around 200 attendees packed into Shoreditch Village Hall; free pizza and beer was provided thanks to the sponsors, which was awesome (and makes logistics easier when you're travelling there from work).

There were three talks.

First, Andrew Martin from British Gas spoke about how they use Docker for testing and continuous deployment of their Node.js microservices - buzzword bingo! But it's helpful to see how companies approach these things.

Second, Johan Euphrosine from Google gave a short demo of Google Cloud Platform for running Docker containers (mostly around Container Engine, but also briefly App Engine). This was relevant to my interests, but I'd already seen this sort of talk online.

Third, Dan Williams presented his holiday photos featuring a journey on a container ship, which wins points from me for liberal interpretation of the meetup topic, and was genuinely very entertaining/interesting - I just regret having to leave to catch a train halfway through.

In summary, this was worth attending, but as someone just getting started with containers I'd love some sort of smaller meetings with opportunities for interaction/activity. There's such a variety of people/use cases for Docker that I'm not sure how much everyone had in common with each other; it would be interesting to find out.

Mark Brown: Kernel build times for automated builders

15 January, 2015 - 05:37

Over the past year or so various people have been automating kernel builds with the aim of both setting the standard that things should build reliably and using the resulting builds for automated testing. This has been having good results, it’s especially nice to compare the results for older stable kernel builds with current ones and notice how much happier everything is.

One of the challenges with doing this is that for good coverage you really need to include allmodconfig or allyesconfig builds to ensure coverage of as much kernel code as possible but that’s fairly resource intensive given the size of the kernel, especially when you want to cover several architectures. It’s also fairly important to get prompt results, development trees are changing all the time and the longer the gap between a problem appearing and it being identified the more likely the report is to be redundant.

Since I was looking at my own setup, and I know of several people who’ve done similar benchmarking, I thought I’d publish some ballpark numbers for from-scratch allmodconfig builds on a single architecture:

i7-4770 with SSD      20 minutes
linode 2048           1.25 hours
EC2 m3.medium         1.5 hours
EC2 c3.medium         2 hours
Cubietruck with SSD   20 hours

All builds had the number of tasks spawned by make set to the number of execution threads the system has, and no speedups from anything like ccache. I may keep this updated with further results in future.
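
For reference, each benchmarked build boils down to something like this (a sketch; clean tree each time):

make mrproper           # throw away any previous build state
make allmodconfig       # enable as much of the tree as possible
time make -j"$(nproc)"  # one make task per execution thread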

Obviously there are tradeoffs beyond the time, especially for someone like me doing this at home with their own resources – my desktop is substantially faster than anything else I’ve tried, but I’m also using it interactively for my work, it’s not easily accessible when not at home, and the fans spin up during builds, while EC2 starts to cost noticeable money as you add more builds.

Ritesh Raj Sarraf: Apport in Debian

14 January, 2015 - 17:47

Looking at the PTS entries, I realized that it has been more than 2 years since I pushed the first Apport packages into Debian.

We have talked about it in the past, and do not see a direct need for apport yet. That is one reason why it still resides (and will continue to) in Experimental.

Even though not used as a bug reporting tool, Apport can still be a great tool for (end) users to detect crashes. It can also be used to find further details about program crashes and pointers to look further.

This post is a call for help from anybody who'd be interested in maintaining Apport in Debian. Most of the work involves packaging new upstream releases and porting the Debian CrashDB to newer versions, as and when necessary.

As said above, it is not going to be a bug reporting tool, but rather a bug monitoring tool.


Ritesh Raj Sarraf: apt-offline 1.6

14 January, 2015 - 17:32

I am pleased to announce the release of apt-offline - 1.6

This release is mostly a bug-fix release, and every user should upgrade to it. It also fixes a major bug in the way we limited GPG integrity validation of the APT repository lists (thank you, Paul Wise).

Also, in the last release we migrated from our custom magic library to the ctypes-based python-magic library. That allowed some bugs to creep in; hopefully all of those are now fixed. A big thanks to Roland Sommer for his bug reports and continuous feedback.

What is apt-offline ?

Description-en: offline APT package manager
 apt-offline is an Offline APT Package Manager.
 .
 apt-offline can fully update and upgrade an APT based distribution without
 connecting to the network, all of it transparent to APT.
 .
 apt-offline can be used to generate a signature on a machine (with no network).
 This signature contains all download information required for the APT database
 system. This signature file can be used on another machine connected to the
 internet (which need not be a Debian box and can even be running windows) to
 download the updates.
 The downloaded data will contain all updates in a format understood by APT and
 this data can be used by apt-offline to update the non-networked machine.
 .
 apt-offline can also fetch bug reports and make them available offline.
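
A typical round trip looks like this (file paths are examples):

# On the offline machine: record what APT needs
apt-offline set /tmp/offline.sig --update --upgrade

# On any internet-connected machine: download everything into one archive
apt-offline get /tmp/offline.sig --bundle /tmp/offline.zip

# Back on the offline machine: feed the downloads to APT
apt-offline install /tmp/offline.zip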

Debian changelog for the 1.6 release.

apt-offline (1.6) experimental; urgency=medium

  * [2a4a7f1] Don't abuse exception handlers.
    Thanks to R-Sommer
  * [afc51b3] MIME type for a deb package.
    Thanks to R-Sommer
  * [ec2d539] Also include debian-archive-keyring.
    Thanks to Hans-Christoph Steiner (Closes: #748082)
  * [dc602ac] Update MIME type for .gpg
  * [c4f9b71] Cycle through possible apt keyrings.
    Thanks to Paul Wise (Closes: #747163)
  * [de0fe4d] Clarify manpage for install
  * [b5e1075] Update manpage with some doc about argparse positional
    values to arguments
  * [c22d64d] Port is data type integer.
    Thanks to Roland Sommer
  * [67edebe] autodetect release name
  * [5803141] Disable python-apt support

 -- Ritesh Raj Sarraf <rrs@debian.org>  Wed, 14 Jan 2015 15:34:45 +0530

[1] https://alioth.debian.org/projects/apt-offline/


Charles Plessy: nodejs-legacy

14 January, 2015 - 15:42

apt install nodejs-legacy if you want npm install to work.
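
For context: Debian ships the interpreter as /usr/bin/nodejs (the bare name "node" clashed with another package), while npm and many npm packages expect /usr/bin/node. The nodejs-legacy package simply provides the compatibility symlink:

apt install nodejs-legacy
command -v node   # now resolves to /usr/bin/node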

Simon Josefsson: Replicant 4.2 0003 on I9300

14 January, 2015 - 06:17

The Replicant project released version 4.2 0003 recently. I have been using Replicant on a Samsung SIII (I9300) for around 14 months now. Since I have blogged about issues with NFC and Wifi earlier, I wanted to give a status update after upgrading to 0003. I’m happy to report that my NFC issue has been resolved in 0003 (the way I suggested: reverting the patch). My issues with Wifi have been improved in 0003, with my merge request being accepted. What follows below is a standalone explanation of what works and what doesn’t, as a superset of similar things discussed in my earlier blog posts.

What works out of the box: Audio, Telephony, SMS, Data (GSM/3G), Back Camera, NFC. 2D graphics is somewhat slow compared to the stock ROM, but I’m using it daily and can live with that, so it isn’t too onerous. Stability is fine, similar to other Android devices I’m used to. Video playback does not work (due to non-free media decoders?), which is not a serious problem for me but still likely the biggest outstanding issue apart from freedom concerns. 3D graphics apparently doesn’t work, and I believe that is what prevents Firefox from working properly (it crashes). I’m having one annoying but strange problem with telephony: when calling one person I get scrambled audio around 75% of the time. I can still hear what the other person is saying, but can barely make anything out of it. This only happens over 3G, so my workaround when calling that person is to switch to 2G before and switch back after. I talk with plenty of other people and have never had this problem with anyone else, and it has never happened when she talks with anyone else but me. If anyone has a suggestion on how to debug this, I’m all ears.

Important apps to get through daily life for me includes K9Mail (email), DAVDroid (for ownCloud CalDav/CardDAV), CalDav Sync Adapter (for Google Calendars), Conversations (XMPP/Jabber chat), FDroid (for apps), ownCloud (auto-uploading my photos), SMS Backup+, Xabber (different XMPP/Jabber accounts), Yubico Authenticator, MuPDF and oandbackup. A couple of other apps I find useful are AdAway (remove web ads), AndStatus, Calendar Widget, NewsBlur and ownCloud News Reader (RSS readers), Tinfoil for Facebook, Twidere (I find its UI somewhat nicer than AndStatus’s), and c:geo.

A number of things requires non-free components. As I discussed in my initial writeup from when I started using Replicant I don’t like this, but I’m accepting it temporarily. The list of issues that can be fixed by adding non-free components include the front camera, Bluetooth, GPS, and Wifi. After flashing the Replicant ROM image that I built (using the fine build instructions), I’m using the following script to add the missing non-free files from Cyanogenmod.

# Download Cyanogenmod 10.1.3 (Android 4.2-based) binaries:
# wget http://download.cyanogenmod.org/get/jenkins/42508/cm-10.1.3-i9300.zip
# echo "073a464a9f5129c490502c77374495c38a25ba790c10e27f51b43845baeba6bf  cm-10.1.3-i9300.zip" | sha256sum -c 
# unzip cm-10.1.3-i9300.zip

adb root
adb remount
adb shell mkdir /system/vendor/firmware
adb shell chmod 755 /system/vendor/firmware

# Front Camera
adb push cm-10.1.3-i9300/system/vendor/firmware/fimc_is_fw.bin /system/vendor/firmware/fimc_is_fw.bin
adb push cm-10.1.3-i9300/system/vendor/firmware/setfile.bin /system/vendor/firmware/setfile.bin
adb shell chmod 644 /system/vendor/firmware/fimc_is_fw.bin /system/vendor/firmware/setfile.bin

# Bluetooth
adb push cm-10.1.3-i9300/system/bin/bcm4334.hcd /system/vendor/firmware/
adb shell chmod 644 /system/vendor/firmware/bcm4334*.hcd

# GPS
adb push cm-10.1.3-i9300/system/bin/gpsd /system/bin/gpsd
adb shell chmod 755 /system/bin/gpsd
adb push cm-10.1.3-i9300/system/lib/hw/gps.exynos4.so /system/lib/hw/gps.exynos4.so
adb push cm-10.1.3-i9300/system/lib/libsecril-client.so /system/lib/libsecril-client.so
adb shell chmod 644 /system/lib/hw/gps.exynos4.so /system/lib/libsecril-client.so

# Wifi
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_semcosh /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_semcosh /system/vendor/firmware/

I hope this helps others switch to a better phone environment!

Daniel Pocock: Silent data loss exposed

14 January, 2015 - 03:06

I was moving a large number of image files around and decided to compare checksums after putting them in their new home.

Out of several thousand files, about 80GB of data, I found that seventeen of them had a checksum mismatch.

Running md5sum manually on each of those was showing a correct checksum, well, up until the sixth file and then I found this:

$ md5sum DSC_2624.NEF
94fc8d3cdea3b0f3479fa255f7634b5b  DSC_2624.NEF
$ md5sum DSC_2624.NEF
25cf4469f44ae5e5d6a13c8e2fb220bf  DSC_2624.NEF
$ md5sum DSC_2624.NEF
03a68230b2c6d29a9888d2358ed8e225  DSC_2624.NEF

Yes, each time I run md5sum on the same file it gives a different result. Out of the seventeen files, I found one other displaying the same problem and the others gave correct checksums when I ran md5sum manually. Definitely not a healthy disk, or is it?

This is the reason why checksumming filesystems like Btrfs are so important.

There are no errors or warnings in the logs on the system with this disk. Silent data loss at its best.

Is the disk to blame though?

It may be tempting to think this is a disk fault; most people have seen faulty disks at some time or another. In the old days you could often hear them, too. There is another possible explanation though: memory corruption. The data read from disk is normally cached in RAM, and if the RAM is corrupt, the cache will return bad data.

I dropped the read cache:

# echo 3 > /proc/sys/vm/drop_caches 

and tried md5sum again and observed the file checksum is now correct.

It would appear the md5sum command had been operating on data in the page cache and the root cause of the problem is memory corruption. Time to run a memory test and then replace the RAM in the machine.
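
To reproduce the symptom and then check the RAM, something along these lines works (the file name is from the example above; memtester size and iteration count are arbitrary):

# Hash the same file repeatedly; varying output means the cached copy
# is being corrupted:
for i in $(seq 5); do md5sum DSC_2624.NEF; done

# Test 1GB of RAM for one pass (requires the memtester package):
memtester 1024M 1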

Jonathan McDowell: Tracking a ship around the world

13 January, 2015 - 23:07

I moved back from the California Bay Area to Belfast a while back and for various reasons it looks like I'm going to be here a while, so it made sense to have my belongings shipped over here. They haven't quite arrived yet, and I'll do another post about that process once they have, but I've been doing various tweets prefixed with "[shipping]" during the process. Various people I've spoken to (some who should know me better) thought this was happening manually. It wasn't. If you care about how it was done, read on.

I'd been given details of the ship carrying my container, and searching for that turned up the excellent MarineTraffic, which let me see the current location of the ship. It turns out ships broadcast their location using AIS, and anyone with a receiver can see the info. Very cool, and I spent some time having a look at various bits of shipping around the UK out of interest. I also found the ship's itinerary, which gave me some idea of where it would be calling and when. Step one was to start recording this data; it was time sensitive and I wanted to be able to see historical data. I took the easy route and set up a cron job to poll the location and itinerary on an hourly basis, and store the results. That meant I had the data over time, if my parsing turned out to miss something I could easily correct it, and I wasn't hammering Marine Traffic while writing the parsing code.
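
The polling boiled down to a crontab entry along these lines (URL and paths are hypothetical; note the %-escaping that cron requires):

0 * * * * curl -s 'http://www.marinetraffic.com/en/ais/details/ships/SHIPID' > $HOME/ship/status-$(date +\%Y\%m\%d\%H).html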

Next I wanted to parse the results, store them in a more manageable format than the HTML, and alert me when the ship docked somewhere or set off again. I've been trying to learn more Python rather than doing my default action of turning to Perl for these things, and this seemed like a simple enough project to try out. Beautiful Soup seemed to turn up top for HTML parsing in Python, so that formed the basis. Throwing the info into a database so I could do queries felt like the right move so I used SQLite - if this had been more than a one off I'd have considered looking at PostgreSQL and its GIS support. Finally Tweepy made it very easy to tweet from Python in about 4 lines of code. The whole thing weighed in at only 175 lines of code, mostly around pulling the info out of the HTML and then a little to deal with checking for state changes against the current status and the last piece of info in the database.

The pieces of information I chose to store were the time of the update (i.e. when the ship sent it, not when my script ran), reported area, reported state, the position + course, reported origin, reported destination and eta. The fact this is all in a database makes it very easy to do a few queries on the data.

How fast did the ship go?

sqlite> SELECT MAX(speed) FROM status;
MAX(speed)
21.9

What areas did it report?

sqlite> SELECT area FROM status GROUP BY area;
area
-
Atlantic North
California
Caribbean Sea
Celtic Sea
English Channel
Hudson River
Pacific North
Panama Canal

What statuses did we see?

sqlite> SELECT status FROM status GROUP BY status;
status
At Anchor
Moored
Stopped
Underway
Underway using Engine

Finally having hourly data lets me draw a map of where the ship went. The data isn't complete, because the free AIS info depends on the ship being close enough to a receiving station. That means there were a few days in the North Atlantic without updates, for example. However there's enough to give a good idea of just how well traveled my belongings are, and it gave me an excuse to play with OpenLayers.

(Apologies if the zoom buttons aren't working for you here; I think I need to kick the CSS in some manner I haven't quite figured out yet.)

Erich Schubert: Big data predictions for 2015

13 January, 2015 - 22:01
My big data predictions for 2015:
  1. Big data will continue to fail to deliver for most companies.
    There are several reasons for this, including in particular:
      1. Lack of data to analyze that actually benefits from big data tools and approaches (and which is not better analyzed with traditional tools).
      2. Lack of talent, and failure to attract analytics talent.
      3. Being stuck in old IT that is too inflexible to allow using modern tools (if you want to use big data, you will need a flexible "in-house development" type of IT that can install tools, try them, and abandon them, without going up and down the management chains).
      4. Too much marketing. As long as big data is run by the marketing department, not by developers, it will fail.
  2. Project consolidation: we have seen hundreds of big data software projects in the last few years, plenty of them at Apache, too. But the current state is a mess: there is massive redundancy, and lots of projects are more or less abandoned. Cloudera ML, for example, is dead, superseded by Oryx and Oryx 2. More projects will be abandoned, because we have way too many (including far too many NoSQL databases that fail to outperform SQL solutions like PostgreSQL). As it stands, we have dozens of competing NoSQL databases, dozens of competing ML tools, dozens of everything.
  3. Hype: the hype will continue, but eventually (when there is too much negative press on the term "big data" due to failed projects and inflated expectations) move on to other terms. The same is also happening to "data science", so I guess the next will be "big analytics", "big intelligence" or something like that.
  4. Less openness: we have seen lots of open-source projects. However, many decided to go with Apache-style licensing - always ready to close down their sharing, and no longer share their development. In 2015, we'll see this happen more often, as companies try to make money off their reputation. At some point, copyleft licenses like GPL may return to popularity due to this.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license.