Planet Debian

Planet Debian - http://planet.debian.org/

Gustavo Noronha Silva: A tale of cylinders and shadows

22 November, 2016 - 00:04

Like I wrote before, we at Collabora have been working on improving WebKitGTK+ performance for customer projects, such as Apertis. We took the opportunity brought by recent improvements to WebKitGTK+ and GTK+ itself to make the final leg of drawing contents to screen as efficient as possible. And then we went on investigating why so much CPU was still being used in some of our test cases.

The first weird thing we noticed is performance was actually degraded on Wayland compared to running under X11. After some investigation we found a lot of time was being spent inside GTK+, painting the window’s background.

Here’s the thing: the problem only showed under Wayland because in that case GTK+ is responsible for painting the window decorations, whereas in the X11 case the window manager does it. That means all of that expensive blurring and rendering of shadows fell on GTK+’s lap.

During the web engines hackfest, a couple of months ago, I delved deeper into the problem and noticed, with Carlos Garcia’s help, that it was even worse when HiDPI displays were thrown into the mix. The scaling made things unbearably slower.

You might also be wondering why painting window decorations would be such a problem anyway. They should only be repainted when a window changes size or state, which should be pretty rare, right? Right, and that is one of the reasons why we had to make it fast: the resizing experience was pretty terrible. But we'll get back to that later.

So I dug into that, made a few tries at understanding the issue and came up with a patch showing how applying the blur was being way too expensive. After a bit of discussion with our own Pekka Paalanen and Benjamin Otte we found the root cause: a fast path was not being hit by pixman due to the difference in scale factors on the shadow mask and the target surface. We made the shadow mask scale the same as the surface’s and voilà, sane performance.
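The shape of that fix can be sketched with a toy model in C. The struct and function names here are invented for illustration; pixman's real fast-path criteria are considerably more involved, and the actual change adjusts the cairo surface's device scale:

```c
#include <stdbool.h>

/* Toy model: a surface with a HiDPI device scale factor (1, 2, ...). */
struct surface {
    int scale;
};

/* The compositor only takes its cheap path when the shadow mask and
 * the target surface agree on scale; otherwise every frame pays for
 * a slow rescaling pass. */
static bool composite_hits_fast_path(const struct surface *mask,
                                     const struct surface *target)
{
    return mask->scale == target->scale;
}

/* The fix: render the shadow mask at the target's scale up front,
 * so later composites stay on the fast path. */
static void make_shadow_mask(struct surface *mask,
                             const struct surface *target)
{
    mask->scale = target->scale;
}
```

The model only captures the matching condition, not the blur itself, but that condition is exactly what decided whether every repaint was cheap or expensive.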

I keep talking about this being a performance problem, but how bad was it? In the following video you can see how huge the performance impact of this problem was on my very recent laptop with a HiDPI display. The video starts with an Epiphany window running with a patched GTK+ showing a nice demo the WebKit folks cooked for CSS animations and 3D transforms.

After a few seconds I quickly alt-tab to the version running with unpatched GTK+ – I made the window the exact size and position of the other one, so that it is under the same conditions and the difference can be seen more easily. It is massive.

Yes, all of that slow down was caused by repainting window shadows! OK, so that solved the problem for HiDPI displays, made resizing saner, great! But why is GTK+ repainting the window even if only the contents are changing, anyway? Well, that turned out to be an off-by-one bug in the code that checks whether the invalidated area includes part of the window decorations.

If the area being changed spanned the whole window width, say, it would always cause the shadows to be repainted. By fixing that, we now avoid all of the shadow drawing code when we are running full-window animations such as the CSS poster circle or gtk3-demo’s pixbufs demo.
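A minimal sketch of that kind of off-by-one follows. The names and geometry are hypothetical; the real GTK+ check operates on cairo regions and is more involved:

```c
#include <stdbool.h>

typedef struct { int x, y, width, height; } rect;

/* Window content area (inside the decorations), in window coordinates. */
static const rect content = { 26, 26, 748, 548 };

/* Buggy check: >= treats an invalidated region that merely reaches the
 * content edge as spilling into the decorations, so a full-width
 * content update always repainted the shadows. */
static bool touches_decorations_buggy(rect r)
{
    return r.x < content.x || r.y < content.y ||
           r.x + r.width  >= content.x + content.width ||
           r.y + r.height >= content.y + content.height;
}

/* Fixed check: a region ending exactly at the content edge stays
 * inside the content area, so the shadow code is skipped. */
static bool touches_decorations_fixed(rect r)
{
    return r.x < content.x || r.y < content.y ||
           r.x + r.width  > content.x + content.width ||
           r.y + r.height > content.y + content.height;
}
```

With the fixed comparison, an animation that repaints the whole content area never triggers the shadow-drawing path.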

As you can see in the video below, the gtk3-demo running with the patched GTK+ (the one on the right) is using a lot less CPU and has smoother animation than the one running with the unpatched GTK+ (left).

Pretty much all of the overhead caused by window decorations is gone in the patched version. It is still using quite a bit of CPU to animate those pixbufs, though, so some work still remains. Also, the overhead added to integrate cairo and GL rendering in GTK+ is pretty significant in the WebKitGTK+ CSS animation case. Hopefully that’ll get much better from GTK+ 4 onwards.

Reproducible builds folks: Reproducible Builds: week 82 in Stretch cycle

21 November, 2016 - 19:47

What happened in the Reproducible Builds effort between Sunday November 13 and Saturday November 19 2016:

Media coverage

Elsewhere in Debian
  • dpkg 1.18.14 has migrated to stretch.

  • Chris Lamb filed #844431 ("packages should build reproducibly") against debian-policy.

  • Ximin worked on glibc reproducibility this week, catching some bugs in disorderfs and FUSE, as well as in glibc itself.

Documentation update

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

43 package reviews have been added, 4 have been updated and 12 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been updated:

4 issue types have been added:

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (26)
  • Daniel Stender (1)
  • Filip Pytloun (1)
  • Lucas Nussbaum (28)
  • Michael Biebl (1)
strip-nondeterminism development

disorderfs development
  • #844498 ("disorderfs: using it for building kills the host")
debrebuild development

debrebuild is a new tool proposed by HW42 and josch (see #774415: "From srebuild sbuild-wrapper to debrebuild").

debrepatch development

debrepatch is a set of scripts that we're currently developing to make it easier to track unapplied patches. We have a lot of those and we're not always sure if they still work. The plan is to set up jobs to automatically apply old reproducibility patches to newer versions of packages and notify the right people if they don't apply and/or no longer make the package reproducible.

debpatch is a component of debrepatch that applies debdiffs to Debian source packages. In other words, it is to debdiff(1) what patch(1) is to diff(1). It is a general tool that is not specific to Reproducible Builds. This week, Ximin Luo worked on making it more "production-ready" and will soon submit it for inclusion in devscripts.

reprotest development

Ximin Luo significantly improved reprotest, adding presets and auto-detection of which preset to use. One can now run e.g. reprotest auto . or reprotest auto $pkg_$ver.dsc instead of the long command lines that were needed before.

He also made it easier to set up build dependencies inside the virtual server and made it possible to specify pre-build dependencies that reprotest itself needs to set up the variations. Previously one had to manually edit the virtual server to do that, which was not very usable to humans without an in-depth knowledge of the building process.

These changes will be tested some more and then released in the near future as reprotest 0.4.

tests.reproducible-builds.org
  • Debian:

    • An index of our usertagged bugs page was added by Holger after a Q+A session in Cambridge.
    • Holger also set up two new i386 builders, build12+16, for >50% increased build performance. For this, we went from 18+17 cores on two 48GB machines to 10+10+9+9 cores on four 36GB RAM machines… and from 16 to 24 builder jobs. Thanks to Profitbricks for providing us with all these resources once more!
    • h01ger also tried to enable disorderfs again, but hit #844498, which brought down the i386 builders, so he disabled it again. Next will be trying disorderfs on armhf or amd64, to see whether this bug also manifests there.
Misc.

This week's edition was written by Chris Lamb, Holger Levsen, Ximin Luo and reviewed by a bunch of Reproducible Builds folks on IRC.

Arturo Borrero González: Great Debian meeting in Seville

21 November, 2016 - 12:00

Last week we had an interesting Debian meeting in Seville, Spain. This has been the third time (in recent years) the local community meets around Debian.

We met at about 20:00 at Rompemoldes, a crafts creation space. There we had a very nice dinner while talking about Debian and FLOSS. The dinner was sponsored by the Plan4D association.

The event was joined by almost 20 people with different relations to Debian:

  • Debian users
  • DDs
  • Debian contributors
  • General FLOSS interested people

I would like to thank all the attendees and Pablo Neira from Plan4D for the organization.

I had to leave the event after 3.5 hours of great talking and networking, but the rest of the people stayed on. The atmosphere was really good :-)

Looking forward to another meeting soon!

Header picture by Ana Rey.

Steve Kemp: Detecting fraudulent signups?

21 November, 2016 - 10:45

I run a couple of different sites that allow users to sign-up and use various services. In each of these sites I have some minimal rules in place to detect bad signups, but these are a little ad hoc, because the nature of "badness" varies on a per-site basis.

I've worked in a couple of places where there are in-house tests of bad signups, and these usually boil down to some naive, and overly-broad, rules:

  • Does the phone number's (international) prefix match the country of the user?
  • Does the postal address supplied even exist?

Some places penalise users based upon location too:

  • Does the IP address the user submitted from come from TOR?
  • Does the geo-IP country match the user's stated location?
  • Is the email address provided by a "free" provider?

At the moment I've got a simple HTTP server which receives a JSON POST of a new user's details, and returns "200 OK" or "403 Forbidden" based on some very simple criteria. This is modeled on the spam-detection service I use for blog comments - something that is itself becoming less useful over time. (Perhaps time to kill that? A decision for another day.)

Unfortunately this whole approach is very reactive, as it takes human eyeballs to detect new classes of problems. Code can't guess in advance that it should block usernames which could collide with official ones - for example "admin", "help", or "support".
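What such a rule engine boils down to can be sketched like this (the field names, rules, domain and reserved-name list are all invented for illustration, not taken from any real site):

```c
#include <stddef.h>
#include <string.h>

struct signup {
    const char *username;
    const char *email;
    const char *country;      /* ISO code the user claimed */
    const char *phone_prefix; /* international dialling prefix */
};

/* Return an HTTP-style status: 200 to allow, 403 to reject. */
static int check_signup(const struct signup *s)
{
    /* Reserved names that could collide with official accounts. */
    static const char *reserved[] = { "admin", "help", "support" };
    for (size_t i = 0; i < sizeof reserved / sizeof reserved[0]; i++)
        if (strcmp(s->username, reserved[i]) == 0)
            return 403;

    /* Naive rule: a "free" email provider (hypothetical domain). */
    if (strstr(s->email, "@freemail.example") != NULL)
        return 403;

    /* Naive rule: a GB address should come with a +44 phone prefix. */
    if (strcmp(s->country, "GB") == 0 &&
        strcmp(s->phone_prefix, "+44") != 0)
        return 403;

    return 200;
}
```

Every rule here illustrates the same weakness described above: it judges the signup, not anything the user has actually done.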

I'm certain that these systems have been written a thousand times, as I've seen at least five such systems, and they're all very similar. The biggest flaw in all these systems is that they try to classify users in advance of them doing anything. We're trying to say "block users who will use stolen credit cards", or "block users who'll submit spam", by correlating that behaviour with other things. In an ideal world you'd judge users only by the actions they take, not how they signed up. And yet… it is better than nothing.

For the moment I'm continuing to try to make the best of things, at least by centralising the rules for myself I cut down on duplicate code. I'll pretend I'm being cool, modern, and sexy, and call this a micro-service! (Ignore the lack of containers for the moment!)

Steinar H. Gunderson: Nageru documentation

21 November, 2016 - 04:45

Even though the World Chess Championship takes up a lot of time these days, I've still found some time for Nageru, my live video mixer. But this time it doesn't come in the form of code; rather, I've spent my time writing documentation.

I spent some time fretting over what technical solution I wanted. I explicitly wanted end-user documentation, not developer documentation—I rarely find HTML-rendered versions of every member function in a class the best way to understand a program anyway. Actually, on the contrary: Having all sorts of syntax interwoven in class comments tends to be more distracting than anything else.

Eventually I settled on Sphinx, not because I found it fantastic (in particular, ReST is a pain with its bizarre variable punctuation-based syntax), but because I'm convinced it has all the momentum right now. Just like git did back in the day, the fact that the Linux kernel has chosen it means it will inevitably grow a quite large ecosystem, and I won't be ending up having to maintain it anytime soon.

I tried finding a balance between spending time on installation/setup (only really useful for first-time users, and even then, only a subset of them), concept documentation (how to deal with live video in general, and how Nageru fits into a larger ecosystem of software and equipment) and more concrete documentation of all the various features and quirks of Nageru itself. Hopefully, most people will find at least something that's not already obvious to them, without drowning in detail.

You can read the documentation at https://nageru.sesse.net/doc/, or if you want to send patches, the right place to patch is the git repository.

Dirk Eddelbuettel: BH 1.62.0-1

19 November, 2016 - 20:07

The BH package on CRAN was updated to version 1.62.0. BH provides a large part of the Boost C++ libraries as a set of template headers for use by R, possibly with Rcpp as well as other packages.

This release upgrades the version of Boost to the upstream version Boost 1.62.0, and adds three new libraries as shown in the brief summary of changes from the NEWS file which follows below.

Special thanks to Kurt Hornik and Duncan Murdoch for help tracking down one abort() call which was seeping into R package builds, and then (re-)testing the proposed fix. We are now modifying one more file ever so slightly to use ::Rf_error(...) instead.

Changes in version 1.62.0-1 (2016-11-15)
  • Upgraded to Boost 1.62 installed directly from upstream source

  • Added Boost property_tree as requested in #29 by Aydin Demircioglu

  • Added Boost scope_exit as requested in #30 by Kirill Mueller

  • Added Boost atomic which we had informally added since 1.58.0

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Keith Packard: AltOS-Lisp-2

19 November, 2016 - 16:20
Updates to Altos Lisp

I wrote a few days ago about a tiny lisp interpreter I wrote for AltOS

Really, it's almost "done" now; I just wanted to make a few improvements.

Incremental Collection

I was on a walk on Wednesday when I figured out that I didn't need to do a full collection every time; a partial collection that only scanned the upper portion of memory would often find plenty of free space to keep working for a while.

To recap, the heap is in two pieces; the ROM piece and the RAM piece. The ROM piece is generated during the build process and never changes afterwards (hence the name), so the only piece which is collected is the RAM piece. Collection works like:

chunk_low = heap base
new_top = heap base

For all of the heap
    Find the first 64 live objects above chunk_low
    Compact them all to new_top
    Rewrite references in the whole heap for them
    Set new_top above the new locations
    Set chunk_low above the old locations

top = new_top

The trick is to realize that there's really no need to start at the bottom of the heap; you can start anywhere you like and compact stuff, possibly leaving holes below that location in the heap. As long-lived objects tend to slowly sift down to the beginning of the heap, it's useful to compact only the objects above them, skipping the compaction process for the more stable area of memory.

Each time the whole heap is scanned, the top location is recorded. After that, incremental collects happen starting at that location, and when that doesn't produce enough free space, a full collect is done.
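That policy can be sketched with a toy heap model (names invented; the real collector compacts 64-object chunks and rewrites references, which this model skips entirely):

```c
#include <string.h>

#define HEAP_SIZE 64

static int heap[HEAP_SIZE];   /* 0 = dead slot, nonzero = live object */
static int top;               /* first free slot */
static int incremental_base;  /* recorded after each full collection */

/* Compact live objects in heap[base..top) down toward base, leaving
 * any holes below base untouched. Returns the new top. */
static int compact_from(int base)
{
    int dst = base;
    for (int src = base; src < top; src++)
        if (heap[src])
            heap[dst++] = heap[src];
    memset(&heap[dst], 0, (size_t)(top - dst) * sizeof heap[0]);
    return dst;
}

/* Try a cheap partial collection above the recorded base first; fall
 * back to a full collection (and record its result) when the partial
 * pass frees too little space. */
static void collect(int needed)
{
    top = compact_from(incremental_base);
    if (HEAP_SIZE - top < needed) {
        top = compact_from(0);
        incremental_base = top;
    }
}
```

Most collections stay in the cheap branch, which is where the "bunch faster on average" comes from.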

The collector now runs a bunch faster on average.

Binary Searches

I had stuck linear searches in a few places in the code. The first was in the collector, when looking to see where an object had moved to; as there are 64 entries, the search is reduced from an average of 32 compares to 6. The second place was in the frame objects, which hold the list of atom/value bindings for each lexical scope (including the global scope). These aren't terribly large, but a binary search is still a fine plan. I wanted to write down here the basic pattern I'm using for binary searches these days, which avoids some of the boundary conditions I've managed to generate in the past:

int find (int needle) {
    int l = 0;
    int r = count - 1;
    while (l <= r) {
        int m = (l + r) >> 1;
        if (haystack[m] < needle)
            l = m + 1;
        else
            r = m - 1;
    }
    return l;
}

With this version, the caller can then check to see if there's an exact match, and if not, then the returned value is the location in the array where the value should be inserted. If the needle is known to not be in the haystack, and if the haystack is large enough to accept the new value:

void insert(int needle) {
    int l = find(needle);

    memmove(&haystack[l + 1],
        &haystack[l],
        (count - l) * sizeof (haystack[0]));

    haystack[l] = needle;
    count++;
}

Similarly, if the caller just wants to know if the value is in the array:

bool exists(int needle) {
    int l = find(needle);

    return (l < count && haystack[l] == needle);
}

Call with Current Continuation

Because the execution stack is managed on the heap, it's completely trivial to provide the scheme-like call with current continuation, which constructs an object which can be 'called' to transfer control to a saved location:

> (+ "hello " (call/cc (lambda (return) (setq boo return) (return "foo "))) "world")
"hello foo world"
> (boo "bar ")
"hello bar world"
> (boo "yikes ")
"hello yikes world"

One thing I'd done previously is dump the entire state of the interpreter on any error, including a full stack trace. I adapted that code for printing these continuation objects:

boo
[
    expr:   (call/cc (lambda (return) (set (quote boo) return) (return "foo ")))
    state:  val
    values: (call/cc
             [recurse...]
             )
    sexprs: ()
    frame:  {}
]
[
    expr:   (+ "hello " (call/cc (lambda (return) (set (quote boo) return) (return "foo "))) "world")
    state:  formal
    values: (+
             "hello "
             )
    sexprs: ((call/cc (lambda (return) (set (quote boo) return) (return "foo ")))
             "world"
             )
    frame:  {}
]

The top stack frame is about to return from the call/cc spot with a value; supply a value to 'boo' and that's where you start. The next frame is in the middle of computing formals for the + s-expression. It's found the + function, and the "hello " string and has yet to get the value from call/cc or the value of the "world" string. Once the call/cc "returns", that value will get moved to the values list and the sexpr list will move forward one spot to compute the "world" value.

Implementing this whole mechanism took only a few dozen lines of code as the existing stack contexts were already a continuation in effect. The hardest piece was figuring out that I needed to copy the entire stack each time the continuation was created or executed as it is effectively destroyed in the process of evaluation.

I haven't implemented dynamic-wind yet; when I did that for nickle, it was a bit of a pain threading execution through the unwind paths.

Re-using Frames

I decided to try and re-use frames (those objects which hold atom/value bindings for each lexical scope). It wasn't that hard; the only trick was to mark frames which have been referenced from elsewhere as not-for-reuse and then avoid sticking those in the re-use queue. This reduced allocations even further so that for simple looping or tail-calling code, the allocator may never end up being called.
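The re-use queue idea can be sketched as follows (invented names; the real code lives in the AltOS lisp sources and cooperates with the collector):

```c
#include <stddef.h>

struct frame {
    int pinned;          /* referenced from elsewhere: never re-use */
    struct frame *next;  /* link in the re-use queue */
};

static struct frame *reuse_queue;

/* When a lexical scope exits, recycle its frame unless something
 * else still holds a reference to it. */
static void frame_release(struct frame *f)
{
    if (f->pinned)
        return;
    f->next = reuse_queue;
    reuse_queue = f;
}

/* Allocate a frame, preferring the re-use queue; only when the queue
 * is empty does the allocator (stand-in: 'fallback') get involved. */
static struct frame *frame_alloc(struct frame *fallback)
{
    if (reuse_queue) {
        struct frame *f = reuse_queue;
        reuse_queue = f->next;
        return f;
    }
    return fallback;
}
```

For a tight loop or tail call that releases its frame each iteration, every allocation after the first is served from the queue, which is why the allocator may never end up being called.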

How Big Is It?

I've managed to squeeze the interpreter and all of the rest of the AltOS system into 25kB of Cortex-M0 code. That leaves space for the 4kB boot loader and 3kB of flash to save/restore the 3kB heap across resets.

Adding builtins to control timers and GPIOs would make this a reasonable software load for an Arduino, offering a rather different programming model for those with a taste for adventure. Modern ARM-based Arduino boards have plenty of flash and RAM for this. It might be interesting to get this running on the Arduino Zero; there's no real reason to replace the OS either, as porting the lisp interpreter into the bare Arduino environment wouldn't take long.

Dirk Eddelbuettel: Rcpp 0.12.8: And more goodies

18 November, 2016 - 18:41

Yesterday the eighth update in the 0.12.* series of Rcpp made it to the CRAN network for GNU R where the Windows binary has by now been generated too; the Debian package is on its way as well. This 0.12.8 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, and the 0.12.7 release in September --- making it the twelfth release at the steady bi-monthly release frequency. While we are keeping with the pattern, we have managed to include quite a lot of nice stuff in this release. None of it is a major feature, though, and so we have not increased the middle number.

Among the changes this release are (once again) much improved exception handling (thanks chiefly to Jim Hester), better large vector support (by Qiang), a number of Sugar extensions (mostly Nathan, Qiang and Dan) and beginnings of new DateVector and DatetimeVector classes, and other changes detailed below. We plan to properly phase in the new date(time) classes. For now, you have to use a #define such as this one in Rcpp.h which remains commented-out for now. We plan to switch this on as the new default no earlier than twelve months from now.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 843 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by eighty-four packages, or a full ten percent, just since the last release in early September!

Again, we are lucky to have such a large group of contributors. Among them, we have invited Nathan Russell to the Rcpp Core team given his consistently excellent pull requests (as well as many outstanding Stackoverflow answers for Rcpp). More details on changes are below.

Changes in Rcpp version 0.12.8 (2016-11-16)
  • Changes in Rcpp API:

    • String and vector elements now use extended R_xlen_t indices (Qiang in PR #560)

    • Hashing functions now return unsigned int (Qiang in PR #561)

    • Added static methods eye(), ones(), and zeros() for select matrix types (Nathan Russell in PR #569)

    • The exception call stack is again correctly reported; print methods and tests added too (Jim Hester in PR #582 fixing #579)

    • Variadic macros no longer use a GNU extension (Nathan in PR #575)

    • Hash index functions were standardized on returning unsigned integers (Also PR #575)

  • Changes in Rcpp Sugar:

    • Added new Sugar functions rowSums(), colSums(), rowMeans(), colMeans() (PR #551 by Nathan Russell fixing #549)

    • Range Sugar now uses the R_xlen_t type for start/end (PR #568 by Qiang Kou)

    • Defining RCPP_NO_SUGAR no longer breaks the build. (PR #585 by Daniel C. Dillon)

  • Changes in Rcpp unit tests

    • A test for expression vectors was corrected.

    • The constructor test for datetime vectors reflects the new classes, which treat Inf correctly (and still as a non-finite value)

  • Changes in Rcpp Attributes

    • An 'empty' return was corrected (PR #589 fixing issue #588, and with thanks to Duncan Murdoch for the heads-up)
  • Updated Date and Datetime vector classes:

    • The DateVector and DatetimeVector classes were renamed with a prefix old; they are currently typedef'ed to the existing name (#557)

    • New variants newDateVector and newDatetimeVector were added based on NumericVector (also #557, #577, #581, #587)

    • By defining RCPP_NEW_DATE_DATETIME_VECTORS the new classes can be activated. We intend to make the new classes the default no sooner than twelve months from this release.

    • The capabilities() function can also be used to check for the presence of this feature

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible builds folks: Reproducible Builds: week 81 in Stretch cycle

18 November, 2016 - 02:46

What happened in the Reproducible Builds effort between Sunday November 6 and Saturday November 12 2016:

Media coverage

Matthew Garrett blogged about Tor, TPMs and service integrity attestation and how reproducible builds are the base for systems integrity.

The Linux Foundation announced renewed funding for us as part of the Core Infrastructure Initiative. Thank you!

Outreachy updates

Maria Glukhova has been accepted into the Outreachy winter internship and will work with us, the Debian reproducible builds team.

To quote her words

siamezzze: I've been accepted to #outreachy winter internship - going to
work with Debian reproducible builds team. So excited about that! <3

Debian

Toolchain development and fixes

dpkg:

  • Thanks to a series of dpkg uploads by Guillem Jover, all our toolchain changes are now finally available in sid!
  • This means your packages should now be reproducible without having to use our custom APT repository.
  • Ximin Luo opened #843925 as a reminder that dpkg-buildpackage should sign buildinfo files.
  • We hope to have a detailed post about the new dpkg and the new .buildinfo files for debian-devel-announce soon!

debrebuild:

  • srebuild / debrebuild work was resumed by Johannes Schauer and others in #774415.
Bugs filed

Chris Lamb:

Daniel Shahaf:

Niko Tyni:

Reiner Herrman:

Reviews of unreproducible packages

136 package reviews have been added, 5 have been updated and 7 have been removed in this week, adding to our knowledge about identified issues.

3 issue types have been updated:

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (29)
  • Niko Tyni (1)
diffoscope development

A new version of diffoscope 62~bpo8+1 was uploaded to jessie-backports by Mattia Rizzolo.

Meanwhile in git, Ximin Luo greatly improved speed by fixing an O(n²) lookup which was causing diffs of large packages such as GCC and glibc to take many more hours than necessary. When this commit is released, we should hopefully see full diffs for such packages again. Currently we have 197 source packages which, when built, diffoscope fails to analyse.

buildinfo.debian.net development
  • Submissions with duplicate Installed-Build-Depends entries are rejected now that a bug in dpkg causing them has been fixed. Thanks to Guillem Jover.
  • Add a new page for every (source, version) combination, for example diffoscope 62.
  • DigitalOcean have generously offered to sponsor the hardware buildinfo.debian.net is running on.
tests.reproducible-builds.org

Debian:

  • For privacy reasons, the new dpkg-genbuildinfo includes Build-Path only if it is under /build. HW42 updated our jobs so this is the case for our builds too, so you can see the build path in the .buildinfo files.
  • HW42 also updated our jobs to vary the basename of the source extraction directory. This detects packages that incorrectly assume a $pkg-$version directory naming scheme (which is what dpkg-source -x gives, but is not mandated by Debian nor always true) or that they're being built from an SCM.
  • The new dpkg-genbuildinfo also records a sanitised Environment. This is different in our builds, so HW42, Reiner and Holger updated our jobs to hide these differences from diffoscope output.
  • Package-set improvements:
  • Valerie Young contributed four patches for our long-planned transition from SQLite to PostgreSQL.
  • In anticipation of the freeze, already-tested packages from unstable and testing on amd64 are now scheduled with equal priority.
reproducible-builds.org website

F-Droid was finally added to our list of partner projects. (This was an oversight and they had already been working with us for some time.)

Misc.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Gunnar Wolf: Book presentation by @arenitasoria: Hacker ethics, security and surveillance

18 November, 2016 - 02:24

At the beginning of this year, Irene Soria invited me to start a series of talks on the topic of hacker ethics, security and surveillance. I presented a talk titled Cryptography and identity: Not everything is anonymity.

The talk itself is recorded and available in archive.org (sidenote: I find it amazing that Universidad del Claustro de Sor Juana uses archive.org as their main multimedia publishing platform!)

But as part of this exercise, Irene invited me to write a chapter for a book covering the series. And, yes, she delivered!

So, finally, we will have the book presentation:

I know, not everybody following my posts (that means: only those at or near Mexico City) will be able to join. But the good news: the book, as soon as it is presented, will be published under a CC BY-SA license. Of course, I will announce when it is ready.

Urvika Gola: Reaching out to Outreachy

18 November, 2016 - 01:51

The past few weeks have been a whirlwind of work with my application process for Outreachy taking my full attention.

When I got to know about Outreachy, I was intrigued as well as dubious. I had many unanswered questions in my mind. Honestly, I had that fear of failure which prevented me from submitting my partially filled application form. I kept wondering whether the application was 'good enough', whether my answers were 'perfect' and had the right balance of du-uh and oh-ah!
In moments of doubt, it is important to surround yourself with people who believe in you more than you believe in yourself. I've been fortunate enough to have two amazing people: my sister Anjali, who is an engineer at Intel, and my friend Pranav, who completed his GSoC 2016 with Debian. They believed in me when I sat staring at my application and encouraged me to click that final button.

When I initially applied for Outreachy, I was given a task of building Lumicall, and a subsequent task of examining a Bash script which solves the DNS-01 challenge. Within a limited time frame, I figured things out, wrote my solution in Java, tested it against a server, and then eagerly waited for the results to come out. Going through a full cycle of:

I was elated when I got to know I'd been selected for Outreachy to work with Debian. I was excited about open source, and found the idea of working on the project fun because of the numerous possibilities of contributing towards voice, video and chat communication software.

My project mentor, Daniel Pocock, played a pivotal role in the time after I had submitted my application. Like a true mentor, he replied to my queries promptly and guided me towards finding the solutions to problems on my own. He exemplified how to feel comfortable with developing on open source. I felt inspired and encouraged to move along in my work.

Beyond that, the MiniDebConf was where I was finally introduced to the Debian community. It was an overwhelming experience and I felt proud to have come so far. It was pretty cool to see JitsiMeet being used for the video call. I was also introduced to two of my mentors, Juliana Louback and Bruno Magalhães. I am very excited to learn from them.

I am glad I applied for Outreachy which helped me identify my strengths and I am totally excited to be working with Debian on the project and learn as much as I can throughout the period.

I am not a blog person; this is my first blog post ever! I would love to share my experience with you all in the hope of inspiring someone else who is afraid of clicking that final button!


Jonathan Dowland: Docker lecture

17 November, 2016 - 22:43

This morning I gave a guest lecture to students at the EPSRC Centre for Doctoral Training in Cloud Computing for Big Data. The subject was a gentle introduction to Docker.

This was the first guest lecture I've given in a couple of years, so I thought I'd be a little rusty, but I had a good time giving it and hopefully it went over OK.

slides.pdf; handouts.pdf (3 slides to a page with space for notes); demo steps.txt (the steps I followed for some of the demos). The slides are probably not that much use without the context of being in the lecture; I'll add my presenter notes and post an update when I've done that.

I mentioned a couple of things worth linking to here:

There was some discussion about alternatives to Docker; things which were briefly mentioned include:


Uwe Kleine-König: Installing Debian Stretch on a Turris Omnia

17 November, 2016 - 16:36

Recently I got "my" Turris Omnia and it didn't take long to replace the original firmware with Debian.

If you want to reproduce, here is what you have to do:

Open the case of the Turris Omnia, connect the hacker pack (or an RS232-to-TTL adapter) to access the U-Boot prompt (see Turris Omnia: How to use the "Hacker pack"). Then download the installer and device tree:

# cd /srv/tftp
# wget https://d-i.debian.org/daily-images/armhf/daily/netboot/vmlinuz
# wget https://d-i.debian.org/daily-images/armhf/daily/netboot/initrd.gz
# wget https://www.kleine-koenig.org/tmp/armada-385-turris-omnia.dtb

(The latter is not yet included in Debian, but I'm working on that.)

After connecting the Turris Omnia's WAN port to a DHCP-managed network, start it in U-Boot:

dhcp
setenv serverip 192.168.1.17
tftpboot 0x01000000 vmlinuz
tftpboot 0x02000000 armada-385-turris-omnia.dtb
tftpboot 0x03000000 initrd.gz
bootz 0x01000000 0x03000000:$filesize 0x02000000

Here 192.168.1.17 is the IPv4 address of the machine running the tftp server.

I suggest using btrfs as the rootfs because U-Boot can load files from it (note the btrload commands below). Before finishing the installation, put the dtb into the rootfs as /boot/dtb.

To then boot into Debian do in U-Boot:

setenv mmcboot btrload mmc 0 0x01000000 boot/vmlinuz\; btrload mmc 0 0x02000000 boot/dtb\; btrload mmc 0 0x03000000 boot/initrd.img\; bootz 0x01000000 0x03000000:$filesize 0x02000000
setenv bootargs console=ttyS0,115200 rootfstype=btrfs rootdelay=2 root=/dev/mmcblk0p1 rootflags=commit=5 rw
saveenv
boot

Known issues:

  • rtc doesn't work (workaround: mw 0xf10184a0 0xfd4d4cfa in U-Boot)
  • SFP and switch don't work, MAC addresses are random
  • wifi fails to probe

If you have problems, don't hesitate to contact me.

Also check the Debian Wiki for further details.

Martín Ferrari: Replacing proprietary Play Services with MicroG

17 November, 2016 - 12:59

For over a year now, I have been using CyanogenMod in my Nexus phone. At first I just installed some bundle that brought all the proprietary Google applications and libraries, but later I decided that I wanted more control over it, so I did a clean install with no proprietary stuff.

This was not so great at the beginning, because the base system lacks the geolocation helpers that allow you to get a position in seconds (using GSM antennas and Wifi APs). And so, every time I opened OsmAnd (a great maps application, free software and free maps), I would have to wait minutes for the GPS radio to locate enough satellites.

Shortly after, I found out about the UnifiedNLP project, which provided a drop-in replacement for the location services, using pluggable location providers. This worked great, and you could choose to use the Apple or Mozilla on-line providers, or off-line databases that you could download or build yourself.

This worked well for most of my needs, and I was happy about it. I also had F-droid for installing free software applications, DAVdroid for contacts and calendar synchronisation, K9 for reading email, etc. I still needed some proprietary apps, but most of the stuff in my phone was Free Software.

But sadly, more and more application developers are buying into the vendor lock-in that is Google Play Services, which is a set of APIs that offer some useful functionality (including the location services), but that require non-free software that is not part of the AOSP (Android Open-Source Project). Mostly, this is because they make push notifications a requirement, or because they want you to be able to buy stuff from the application.

This is not limited to proprietary software. Most notably, the Signal project refuses to work without these libraries, or even to distribute the pre-compiled binaries on any platform that is not Google Play! (This is one of many reasons why I don't recommend Signal to anybody.)

And of course, many very useful services that people use every day require you to install proprietary applications, which care much less about your choices of non-standard platforms.

For the most part, I had been able to just get the package files for these applications1 from somewhere, and have the functionality I wanted.

Some apps would just work perfectly, others would complain about the lack of Play Services, but offer a degraded experience. You would not get notifications unless you re-opened the application, stuff like that. But they worked. Lately, some of the closed-source apps I sometimes use stopped working altogether.

So, tired of all this, I decided to give the MicroG project a try.

MicroG

MicroG is a direct descendant of UnifiedNLP and the NOGAPPS project. I had known about it for a while, but the installation procedures always put me off.

LWN published an article about them recently, and so I decided to spend a good chunk of time making it work. This blog post is to help making this a bit less painful.

Some prerequisites:

  • No Gapps installed.
  • Rooted phone (at least for the mechanism I describe here).
  • Working ADB with root access to the phone.
  • UnifiedNLP needs to be uninstalled (MicroG carries its own version of it).
Signature spoofing

The main problem with the installation is that MicroG needs to pretend to be the original Google bundle. It has to show the same name, but most importantly, it has to spoof its cryptographic signatures so other applications do not realise it is not the real thing.

OmniROM and MarshROM (other alternative firmwares for Android) support signature spoofing out of the box. If you are running one of these, go ahead and install MicroG; it will be very easy! Sadly, CyanogenMod refused to allow signature spoofing, citing security concerns2, so most users will have to go the hard way.

The options for enabling this feature are basically two: either patch some core libraries H4xx0r style, or use the "Xposed framework". Since I still don't really understand what this Xposed thing is, and it is one of these projects that distributes files on XDA forums, I decided to go the dirty way.

Patching the ROM

Note: this method requires a rooted phone, java, and adb.

The MicroG wiki links to three different projects for patching your ROM, but it turns out that two of them would not work at all with CyanogenMod 11 and 13 (the two versions I tried), because the system is "odexed" (whatever that means; the Android ecosystem is really annoying).

I actually upgraded CM to version 13 just to try this, so a couple of hours went there, with no results.

The project that did work is Haystack by Lanchon, which seems to be the cleanest and best developed of the three. Also the one with the most complex documentation.

In a nutshell, you need to download a bunch of files from the phone, apply a series of patches to them, and then re-upload them.

To obtain the files and place them in the maguro (the codename for my phone) directory:

$ ./pull-fileset maguro

Now you need to apply some patches with the patch-fileset script. The patches are located in the patches directory:

$ ls patches
sigspoof-core
sigspoof-hook-1.5-2.3
sigspoof-hook-4.0
sigspoof-hook-4.1-6.0
sigspoof-hook-7.0
sigspoof-ui-global-4.1-6.0
sigspoof-ui-global-7.0

The patch-fileset script takes these parameters:

patch-fileset <patch-dir> <api-level> <input-dir> [ <output-dir> ]

Note that this requires knowing your Android version and its API level. If you don't specify an output directory, the script appends the patch name to the input directory name. Also check the output of the script for any warnings or errors!

First, you need to apply the patch that hooks into your specific version of Android (6.0, API level 23 in my case):

$ ./patch-fileset patches/sigspoof-hook-4.1-6.0 23 maguro
>>> target directory: maguro__sigspoof-hook-4.1-6.0
[...]
*** patch-fileset: success

Now, you need to add the core spoofing module:

$ ./patch-fileset patches/sigspoof-core 23 maguro__sigspoof-hook-4.1-6.0
>>> target directory: maguro__sigspoof-hook-4.1-6.0__sigspoof-core
[...]
*** patch-fileset: success

And finally, add the UI elements that let you enable or disable the signature spoofing:

$ ./patch-fileset patches/sigspoof-ui-global-4.1-6.0 23 maguro__sigspoof-hook-4.1-6.0__sigspoof-core
>>> target directory: maguro__sigspoof-hook-4.1-6.0__sigspoof-core__sigspoof-ui-global-4.1-6.0
[...]
*** patch-fileset: success
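The three invocations above follow a mechanical pattern, so they could be chained with a small wrapper (a hypothetical helper, not part of Haystack; it relies only on the directory-naming convention shown above):

```shell
# apply_patches <api-level> <input-dir> <patch>...
# Runs ./patch-fileset once per patch, feeding each output directory
# (input dir + "__" + patch name) into the next invocation, and
# prints the final directory name.
apply_patches() {
    api=$1; dir=$2; shift 2
    for p in "$@"; do
        ./patch-fileset "patches/$p" "$api" "$dir" || return 1
        dir="${dir}__${p}"
    done
    echo "$dir"
}

# For the maguro example above:
# apply_patches 23 maguro sigspoof-hook-4.1-6.0 sigspoof-core sigspoof-ui-global-4.1-6.0
```

The final directory name printed by the helper is what you would hand to push-fileset.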

Now, you have a bundle ready to upload to your phone, and you do that with the push-fileset script:

$ ./push-fileset maguro__sigspoof-hook-4.1-6.0__sigspoof-core__sigspoof-ui-global-4.1-6.0

After this, reboot your phone, go to Settings / Developer settings, and at the end you should find a checkbox for "Allow signature spoofing", which you should now enable.

Installing MicroG

Now that the difficult part is done, the rest of the installation is pretty easy. You can add the MicroG repository to F-Droid and install the rest of the modules from there. Check the installation guide for all the details.

Once all the parts are in place, and after a last reboot, you should find a MicroG settings icon that will check that everything is working correctly, and give you the choice of which components to enable.

So far, other applications believe this phone has nothing weird, I get quick GPS fixes, push notifications seem to work... Not bad at all for such a young project!

Hope this is useful. I would love to hear your feedback in the comments!

  1. Which is a pretty horrible thing, having to download from fishy sites because Google refuses to let you use the marketplace without their proprietary application. I used to use a chrome extension to trick Google Play into believing your phone was downloading it, and so you could get the APK file, but now that I have no devices running stock Android, that does not work any more. ↩

  2. Actually, I am still a bit concerned about this, because it is not completely clear to me how protected this is against malicious actors (I would love to get feedback on this!). ↩


Carl Chenet: Retweet 0.10: Automatically retweet now using regex

17 November, 2016 - 06:00

Retweet 0.10, a self-hosted Python app to automatically retweet and like tweets from another user-defined Twitter account, was released on November 17th.

With this release, Retweet is able to retweet a tweet only if it matches a user-provided regular expression (regex) pattern. This feature was contributed entirely by Vanekjar; lots of thanks to him!
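As an illustration of the idea only (this is a plain grep pipeline, not Retweet's actual code or configuration), filtering a stream of tweet texts with an extended regex looks like this:

```shell
# Keep only the "tweets" whose text matches a user-provided regex.
printf '%s\n' \
    "Retweet 0.10 released" \
    "lunch break" \
    "new release of Feed2tweet" \
  | grep -E 'release'
# prints the first and third lines only
```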

Retweet 0.10 is already in production for Le Journal du hacker, a French-speaking Hacker News-like website; LinuxJobs.fr, a French-speaking job board; and this very blog.

What’s the purpose of Retweet?

Let’s face it, it’s more and more difficult to communicate about our projects. Even writing an awesome app is not enough any more. If you don’t appear on a regular basis on social networks, everybody thinks you quit or that the project is stalled.

But what if you have already built an audience on Twitter for, let's say, your personal account? Now you want to automatically retweet and like all tweets from the account of your new project, to push it forward.

Sure, you can do it manually, like in the good old 90's… or you can use Retweet!

Twitter Out Of The Browser

Have a look at my Github account for my other Twitter automation tools:

  • Feed2tweet, an RSS-to-Twitter command-line tool
  • db2twitter, which gets data from an SQL database (several are supported), builds tweets and sends them to Twitter
  • Twitterwatch, which monitors the activity of your Twitter timeline and warns you if no new tweets appear

What about you? Do you use tools to automate the management of your Twitter account? Feel free to give me feedback in the comments below.

Shirish Agarwal: The long tail in a common’s man journey to debconf16 – 2

17 November, 2016 - 03:10

This is an extension of part 1, which I shared a few days ago. This will be a longish one, so please bear with me.

First of all, somebody emailed me this link, so in the future a layover at Doha Airport will be a bit more expensive than before: approximately INR 700 added to the ticket cost.

Moving on, let me share an experience from one of the last few days I spent in Cape Town.

A singer performing some great oldies from the 60's and 70's up to the 90's.

I had booked a place near Long Street, Cape Town, with Bernelle's help. What I had not known at the time was that free walking tours start near Long Street every couple of hours. I took part in all the tours and they were nice experiences. Where the walks start, there was the gentleman pictured above. I was amazed by his rich voice. He strummed a lot of classics from the 60's and 70's up to the 90's. I had two coffees and felt as if I was at a premium rock concert. It was a bitter-sweet experience for me, because I could see that he has such prodigious talent and yet still has to struggle to make ends meet. I did my bit, but wish I could have done something more.

Side note – before I forget, there is one feh trick I use to view images without rendering them at full resolution (especially useful on my low-end systems):

┌─[shirish@debian] - [/run/user/1000/gvfs/mtp:host=%5Busb%3A001%2C006%5D/Card/DCIM/Camera] - [4621]
└─[$] feh -g 1350x1000 .

This makes it far, far easier to traverse the 1000-odd photos of the trip in my personal archive without any conversion step. Btw, it took me some time but I was finally able to create an album at gallery.debconf.org. I haven't been able to upload photos yet, as I came across an error which I have shared at https://lists.debconf.org/lurker/message/20161113.215659.fce58823.en.html

Moving on, here’s the funny story/experience I wanted to share –

What happened was this (at Doha Airport): I had seen big buggies (similar to golf carts) ferrying people from one end of the concourse to the other. I had been walking the whole day, and even with the horizontal escalators and everything, it takes a toll. I was half-tired, half-sleepy and saw a buggy stationed nearby. From behind it looked like the buggies I had seen. As there was no other place to park my behind, I got into the buggy and sat there. Around 15-20 minutes later a Doha cop in another buggy came up to me and asked if something had happened.

I had no clue what he was talking about. He asked me, in a friendly tone, whether I had committed a crime or wanted to report one. When I replied in the negative to both, he asked why I was sitting there. I replied that it was to stretch my legs, and that this was the buggy being used to transport people from A to B. He gently told me I had got into the wrong one: it was actually a cop buggy. I couldn't believe it. He went on his way, as he saw I was dead tired. After 10-15 minutes, half believing him, I came out of the buggy, and to my shock the gentleman was right. There was nothing to do but soldier on and find a spot in this big airport. I shared this with a few friends and family and managed to elicit a few laughs, hence sharing it here.

The somewhat sad one: I had met a couple with a baby. As shared before, most airports, including Doha, are air-conditioned/climate-controlled at probably the mid-20s Celsius, so it was more than cold for me. The couple with the baby were from the Asian subcontinent, and from their clothes and demeanour they were not very well off. I do remember them sharing that there had been a death in the family and hence they were travelling. I didn't know at that point that there is something called bereavement fares, or whether they had been able to take advantage of such tickets. But that is beside the point. The issue was that their baby had been running a high fever and the A/C was making matters worse. I had seen a pharmacy but no clinic in the airport. Much later I came across http://dohahamadairport.com/airport-guide/facilities-services/medical-emergencies but, as can be seen on the web page, it doesn't say whether the services are chargeable or not. I assume they would be paid, although in some 'developed/industrialized' countries such care is rumoured to be free for simple ailments like the one the baby was going through. I have no idea whether that's true. I also don't know how it interacts with travel insurance, as most travel insurance is supposed to help you in situations like these. I was concerned because it was a baby, and babies, as everybody knows, are very fragile. If anybody has an idea, or has had a similar experience, I would like to know, specifically in an international-airport environment, which has 'transit' issues, unlike domestic airports where I think things would be a bit easier.

Now, coming to my own inadequacies/lack of foresight, which I had mentioned I would share: I had asked and got to lead a Debian installation workshop on the Open Debian Day. I had done a few earlier and had installed Debian a few times on my own system and for friends, relatives and some clients. The only bad experiences I had had were to do with UEFI, but even those had been resolved quite a bit in the jessie releases, so I was pretty confident. The day before the Installfest was to happen, 'Mensah Nyarko Yaa Dufie' (one full name) of Ghana approached me to install Debian on her system. I had some older version of the Debian DVD, either 8.1 or 8.3, and knew that 8.5 had been released just a few days back. Having seen pretty fast internet (as far as downloading a Debian DVD is concerned), I asked her to wait a bit while I downloaded the newest image. I sha256summed it to make sure the image was bit-for-bit perfect.

Now I hadn’t bought a pen drive/disk from India as I was under the impression that in such conferences, pen drives should not be an issue. I had asked Bernelle privately before via e-mail as well and she had assured me that some pen-drives would be available. She gave me a handful of HP pen drives. The pen drives as we came know during our usage were somewhat flaky. It would pop out/lose connection even with the slightest nudge to the lappy.

Somehow I was able to transfer the image to the USB disk. As people say, hindsight is 20/20; maybe it was not such a smart move on my part to download the big DVD image, and maybe I should have got the netinstall ISO instead. Be careful: the link I have just shared is for an old version; if you have a good network link and want to try the newest stable netinstall, head to cdimage.debian.org. Apart from that goof-up, I still don't know of any way to tell whether a copy of an .iso image to USB was successful and done correctly.

I used the following command:

sudo dd if=/path_to/debian-dvd.iso of=/dev/sdX   # the USB disk's device node, not a mount point

This is usually /dev/sdb on all of my systems. Her system was a brand new HP (I don't remember the model details) which she had bought just a few weeks or months before DebConf. We tried a few times, but it failed at the boot-loader installation stage. I asked Ritesh Raj Saraff (a friend and DD), and while he had some ideas, none of them worked. Ritesh later pointed me to Steve McIntyre and shared that he is part of the Debian-Installer team. At that point in time I had no clue who Steve McIntyre was, otherwise I probably would not have approached him. He quickly acquiesced to my request and shared that he would be at the workshop. With that load off my mind a little, I apologized to Mensah and asked her to come to the workshop the following day. I had no clue what was wrong at that point, whether it was the iso image on the USB disk or a UEFI issue. This wasn't good for my confidence, but as somebody from the Debian-Installer team would be there, I was somewhat relaxed.

The next day, some more people came for the Installfest. While I had made 2-3 copies, it clearly was not enough as more people came. I was in a frenzy and asked Deven Bansod, Keerthana Krishnan and Prabaharan Jaminy (the GSoC and Outreachy attendees) to volunteer to help make more USB disks from the iso image. I introduced Mensah to Steve McIntyre, and we tried 2-3 times to get Debian installed on her system, but it didn't move from the same place. Ritesh shared that dd had a memory leak and hence cat was a better way to do it. So we did:

$ cat debian.iso > /dev/sdb

and soon other machines had Debian sporting on their desktops.

But mensa’s lappy wouldn’t get move from the boot-loader stage. Suddenly Steve had the bright idea (light bulb moment) that maybe the .iso is corrupted/usb disk is bad or something is incomplete. We started on another usb disk.

Now this is where I have a query. While I don't want to compare, Ubuntu has an image self-checking mechanism, where (presumably, behind the scenes) checksums published in a file are compared with checksums computed from the files on the .iso image. While it does extend your boot time, the end result is that you know if there is some issue with the image on the USB disk. AFAIK we don't have anything similar. The only two things I know of are the wiki page and of course the various checksums of the image as shared at http://cdimage.debian.org/debian-cd/8.6.0/amd64/iso-cd/ or http://cdimage.debian.org/debian-cd/8.6.0/amd64/iso-dvd/

If anybody knows of any movement or a bug in the BTS which I can follow for the above issue please let me know.
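For what it's worth, a rough manual check is possible today: read back from the device exactly as many bytes as the image is long and compare checksums. This is a sketch of mine, not an official Debian mechanism; adjust the image path and device node to your setup:

```shell
# verify_iso_copy <image.iso> <device>
# Compares the SHA-256 of the image with the SHA-256 of the first
# image-sized chunk of the device (extra bytes past the end of the
# image are irrelevant), succeeding only if they match.
verify_iso_copy() {
    iso=$1; dev=$2
    size=$(stat -c %s "$iso")
    sum_iso=$(sha256sum "$iso" | cut -d' ' -f1)
    sum_dev=$(head -c "$size" "$dev" | sha256sum | cut -d' ' -f1)
    [ "$sum_iso" = "$sum_dev" ]
}

# Example: verify_iso_copy debian-dvd.iso /dev/sdb && echo "copy looks good"
```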

This time Steve was able to install it without any issues. I asked him whether he had to create some specific FAT/exFAT/NTFS partitions, as some new UEFI-based laptops need one or more, but he replied in the negative. While Mensah did get her Debian install, the GUI didn't come up, though the command prompt was available. Then Steve added backports to sources.list, got the new kernel and new Intel/Nvidia drivers (I think it was one of those hybrid models, IIRC), and she was able to boot into GNOME on Debian.

I didn’t saw any bug-reports about checksumming state of the applications before installation but did couple of reports about badblocks support and memory checking and from action on both bug-reports it is also need of the hour (although the earlier one has been marked as won’t fix :().

In this whole thing, I liked and appreciated the way Steve handled things; I intuitively understood that he wasn't just part of the Debian-Installer team but something more. I can't explain it, but it was there. A little investigation in the evening turned up that he had been Debian Project Leader for two consecutive years (2008 and 2009). In hindsight it was probably a good thing I didn't know that beforehand, otherwise I probably wouldn't have interacted with him, and it would have been my loss. To have been the DPL and still be so humble while being technically so proficient: I was amazed and didn't know what to make of it.

Here in India, if somebody wins even the mohalla elections (neighbourhood elections), the person carries a big chip on her/his shoulder, not just while in the seat but even beyond; and here was an example of a previous DPL asking one of the developers, in a video call, whether he could spare some time in the next couple of days.

Lastly, last week I was able to report two bugs upstream. The first one is in youtube-dl; it's somewhat complicated, hence I will not go into it at the moment. The second and more surprising one was in 'nano', our esteemed text editor. Hopefully the bug will be fixed once a new version comes out.


Filed under: Miscellenous Tagged: #buggy, #cop, #Debconf16, #doha airport, #Installfest, #nano, #singer, #youtube-dl, travel

Joey Hess: Linux.Conf.Au 2017 presentation on Propellor

16 November, 2016 - 22:13

On January 18th, I'll be presenting "Type driven configuration management with Propellor" at Linux.Conf.Au in Hobart, Tasmania. (Abstract)

Linux.Conf.Au is a wonderful conference, and I'm thrilled to be able to attend it again.

Bits from Debian: Debian Contributors Survey 2016

16 November, 2016 - 21:45

The Debian Contributor Survey launched last week!

In order to better understand and document who contributes to Debian, we (Mathieu O'Neil, Molly de Blanc, and Stefano Zacchiroli) have created this survey to capture the current state of participation in the Debian Project through the lens of common demographics. We hope a general survey will become an annual effort, and that each year there will also be a focus on a specific aspect of the project or community. The 2016 edition contains sections concerning work, employment, and labour issues in order to learn about who is getting paid to work on and with Debian, and how those relationships affect contributions.

We want to hear from as many Debian contributors as possible—whether you've submitted a bug report, attended a DebConf, reviewed translations, maintained packages, participated in Debian teams, or are a Debian Developer. Completing the survey should take 10-30 minutes, depending on your current involvement with the project and employment status.

In an effort to reflect our own ideals as well as those of the Debian project, we are using LimeSurvey, an entirely free software survey tool, in an instance of it hosted by the LimeSurvey developers.

Survey responses are anonymous, IP and HTTP information are not logged, and all questions are optional. As it is still likely possible to determine who a respondent is based on their answers, results will only be distributed in aggregate form, in a way that does not allow deanonymization. The results of the survey will be analyzed as part of ongoing research work by the organizers. A report discussing the results will be published under a DFSG-free license and distributed to the Debian community as soon as it's ready. The raw, disaggregated answers will not be distributed and will be kept under the responsibility of the organizers.

We hope you will fill out the Debian Contributor Survey. The deadline for participation is: 4 December 2016, at 23:59 UTC.

If you have any questions, don't hesitate to contact us via email at:


Creative Commons License: copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported licence.