Planet Debian


Arturo Borrero González: Netfilter in GSoC 2017

9 March, 2017 - 16:00

Great news! The Netfilter project has been elected by Google to be a mentoring organization in this year's Google Summer of Code program. As in previous years, Google seems to realise and support the importance of this software project in the Linux ecosystem.

I will proudly be mentoring a student this year (2017), along with Eric Leblond and, of course, Pablo Neira.

The focus of the Netfilter project has been on nftables for the last few years, and the students joining our community will likely work on the new framework.

For prospective students: there is an ideas document which you must read. The policy in the Netfilter project is to encourage students to send patches before they are elected to join us. Therefore, a good starting point is to subscribe to the mailing lists, download the git code repositories, build by hand the projects (compilation) and look at the bugzilla (registration required).

Due to this type of internships and programs, I believe it is interesting to note the increasing involvement of women in recent years. The ones I can remember right now: Ana Rey (@AnaRB), Shivani Bhardwaj (@tuxish), Laura García and Elise Lennion (blog).

On a side note, Debian is not participating in GSoC this year :-(

Thorsten Glaser: Updated Debian packaging example: PHP webapp with dbconfig-common

9 March, 2017 - 04:15

Since I use this as a base for other PHP packages, like SimKolab, I’ve updated my packaging example with:

  • PHP 7 support (untested, as I need libapache2-mod-php5)
  • tons more utility code for you to use
  • a class autoloader, with example (build time, for now)
  • (at build time) running a PHPUnit testsuite (unless nocheck)

The old features (Apache 2.2 and 2.4 support, dbconfig-common, etc.) are, of course, still there. Support for other webservers could be contributed by you, and I could extend the autoloader to work at runtime (using dpkg triggers) to include dependencies as packaged in other Debian packages. See, nobody needs “composer”! ☻

Feel free to check it out, play around with it, install it, test it, send me improvement patches and feature requests, etc. — it’s here with a mirror at GitHub (since I wrote it myself and the licence is permissive enough anyway).

This posting and the code behind it are sponsored by my employer ⮡ tarent.

Neil McGovern: GNOME ED update – Week 10

9 March, 2017 - 04:02

After quite a bit of work, we finally have the sponsorship brochure produced for GUADEC and GNOME.Asia. Huge thanks to everyone who helped, I’m really pleased with the result. Again, if you or your company are interested in sponsoring us, please drop a mail to!

Food and Games

I like food, and I like games. So this week there were a couple of awesome sneak previews of the upcoming GNOME 3.24 release. Matthias Clasen posted about the GNOME Recipes 1.0 release – tasty snacks are now available directly on the desktop, which means I can also view them when I’m at the back of the house in the kitchen, where the wifi connection is somewhat spotty. Adrien Plazas also posted about GNOME Games – now I can get my retro gaming fix easily.

Signing things

I was sent a package in the post, with lots of blank stickers and a couple of pens. I’ve now signed a load of stickers, and my hand hurts. More details about exactly what this is about soon :)

Antoine Beaupré: An update to GitHub's terms of service

9 March, 2017 - 00:00

On February 28th, GitHub published a brand new version of its Terms of Service (ToS). While the first draft announced earlier in February didn't generate much reaction, the new ToS raised concerns that they may break at least the spirit, if not the letter, of certain free-software licenses. Digging in further reveals that the situation is probably not as dire as some had feared.

The first person to raise the alarm was probably Thorsten Glaser, a Debian developer, who stated that the "new GitHub Terms of Service require removing many Open Source works from it". His concerns are mainly about section D of the document, in particular section D.4 which states:

You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.

Section D.5 then goes on to say:

[...] You grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality

ToS versus GPL

The concern here is that the ToS bypass the normal provisions of licenses like the GPL. Indeed, copyleft licenses are based on copyright law, which forbids users from doing anything with the content unless they comply with the license, which enforces, among other things, "share alike" properties. By granting GitHub and its users rights to reproduce content without explicitly respecting the original license, the ToS may allow users to bypass the copyleft nature of the license. Indeed, as Joey Hess, author of git-annex, explained:

The new TOS is potentially very bad for copylefted Free Software. It potentially neuters it entirely, so GPL licensed software hosted on Github has an implicit BSD-like license

Hess has since removed all his content (mostly mirrors) from GitHub.

Others disagree. In a well-reasoned blog post, Debian developer Jonathan McDowell explained the rationale behind the changes:

My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service.

This seems like a fair point to make: GitHub needs to protect its own rights to operate the service. McDowell then goes on to do a detailed rebuttal of the arguments made by Glaser, arguing specifically that section D.5 "does not grant [...] additional rights to reproduce outside of GitHub".

However, specific problems arise when we consider that GitHub is a private corporation that users have no control over. The "Services" defined in the ToS explicitly "refers to the applications, software, products, and services provided by GitHub". The term "Services" is therefore not limited to the current set of services. This loophole may actually give GitHub the right to bypass certain provisions of licenses used on GitHub. As Hess detailed in a later blog post:

If Github tomorrow starts providing say, an App Store service, that necessarily involves distribution of software to others, and they put my software in it, would that be allowed by this or not?

If that hypothetical Github App Store doesn't sell apps, but licenses access to them for money, would that be allowed under this license that they want to [apply to] my software?

However, when asked on IRC, Bradley M. Kuhn of the Software Freedom Conservancy explained that "ultimately, failure to comply with a copyleft license is a copyright infringement" and that the ToS do outline a process to deal with such infringement. Some lawyers have also publicly expressed their disagreement with Glaser's assessment, with Richard Fontana from Red Hat saying that the analysis is "basically wrong". It all comes down to the intent of the ToS, as Kuhn (who is not a lawyer) explained:

any license can be abused or misused for an intent other than its original intent. It's why it matters to get every little detail right, and I hope Github will do that.

He went even further and said that "we should assume the ambiguity in their ToS as it stands is favorable to Free Software".

The ToS have been in effect since February 28th; users "can accept them by clicking the broadcast announcement on your dashboard or by continuing to use GitHub". The immediacy of the change is one of the reasons why certain people are rushing to remove content from GitHub: there are concerns that continuing to use the service may be interpreted as consent to bypass those licenses. Hess even hosted a separate copy of the ToS [PDF] for people to be able to read the document without implicitly consenting. It is, however, unclear how a user should remove their content from the GitHub servers without actually agreeing to the new ToS.


When I read the first draft, I initially thought there would be concerns about the mandatory Contributor License Agreement (CLA) in section D.5 of the draft:

[...] unless there is a Contributor License Agreement to the contrary, whenever you make a contribution to a repository containing notice of a license, you license your contribution under the same terms, and agree that you have the right to license your contribution under those terms.

I was concerned this would establish the controversial practice of forcing CLAs on every GitHub user. I managed to find a post from a lawyer, Kyle E. Mitchell, who commented on the draft and, specifically, on the CLA. He outlined issues with wording and definition problems in that section of the draft. In particular, he noted that "contributor license agreement is not a legal term of art, but an industry term" and "is a bit fuzzy". This was clarified in the final draft, in section D.6, by removing the use of the CLA term and by explicitly mentioning the widely accepted norm for licenses: "inbound=outbound". So it seems that section D.6 is not really a problem: contributors do not necessarily need to delegate copyright ownership (as some CLAs require) when they make a contribution, unless otherwise noted by a repository-specific CLA.

An interesting concern he raised, however, was with how GitHub conducted the drafting process. A blog post announced the change on February 7th with a link to a form to provide feedback until the 21st, with a publishing deadline of February 28th. This gave little time for lawyers and developers to review the document and comment on it. Users then had to basically accept whatever came out of the process as-is.

Unlike every software project hosted on GitHub, the ToS document is not part of a Git repository people can propose changes to or even collaboratively discuss. While Mitchell acknowledges that "GitHub are within their rights to update their terms, within very broad limits, more or less however they like, whenever they like", he sets higher standards for GitHub than for other corporations, considering the community it serves and the spirit it represents. He described the process as:

[...] consistent with the value of CYA, which is real, but not with the output-improving virtues of open process, which is also real, and a great deal more pleasant.

Mitchell also explained that, because of its position, GitHub can have a major impact on the free-software world.

And as the current forum of preference for a great many developers, the knock-on effects of their decisions throw big weight. While GitHub have the wheel—and they’ve certainly earned it for now—they can do real damage.

In particular, there have been some concerns that the ToS change may be an attempt to further the already diminishing adoption of the GPL for free-software projects; on GitHub, the GPL has been surpassed by the MIT license. But Kuhn believes that attitudes at GitHub have begun changing:

GitHub historically had an anti-copyleft culture, which was created in large part by their former and now ousted CEO, Preston-Werner. However, recently, I've seen people at GitHub truly reach out to me and others in the copyleft community to learn more and open their minds. I thus have a hard time believing that there was some anti-copyleft conspiracy in this ToS change.

GitHub response

However, it seems that GitHub has actually been proactive in reaching out to the free software community. Kuhn noted that GitHub contacted the Conservancy to get its advice on the ToS changes. While he still thinks GitHub should fix the ambiguities quickly, he also noted that those issues "impact pretty much any non-trivial Open Source and Free Software license", not just copylefted material. When reached for comments, a GitHub spokesperson said:

While we are confident that these Terms serve the best needs of the community, we take our users' feedback very seriously and we are looking closely at ways to address their concerns.

Regardless, free-software enthusiasts have other concerns than the new ToS if they wish to use GitHub. First and foremost, most of the software running GitHub is proprietary, including the JavaScript served to your web browser. GitHub also created a centralized service out of a decentralized tool (Git). It has become the largest code hosting service in the world after only a few years and may well have become a single point of failure for free software collaboration in a way we have never seen before. Outages and policy changes at GitHub can have a major impact on not only the free-software world, but also the larger computing world that relies on its services for daily operation.

There are now free-software alternatives to GitHub. GitLab, for example, does not seem to have similar licensing issues in its ToS, and GitLab itself is free software, although based on the controversial open-core business model. The GitLab hosting service still needs to get better than its grade of "C" in the GNU Ethical Repository Criteria Evaluations (and that is being worked on); other services like GitHub and SourceForge score an "F".

In the end, all this controversy might have been avoided if GitHub was generally more open about the ToS development process and gave more time for feedback and reviews by the community. Terms of service are notorious for being confusing and something of a legal gray area, especially for end users who generally click through without reading them. We should probably applaud the efforts made by GitHub to make its own ToS document more readable and hope that, with time, it will address the community's concerns.

Note: this article first appeared in the Linux Weekly News.

Clint Adams: Oh, little boy, pick up the pieces

8 March, 2017 - 23:06

Chris sat in the window seat in the row behind his parents. Actually he also sat in half of his neighbor’s seat. His neighbor was uncomfortable but said nothing and did not attempt to lower the armrest to try to contain his girth.

His parents were awful human beings: selfish, self-absorbed and controlling. “Chris,” his dad would say, “look out the window!” His dad was the type of officious busybody who would snitch on you at work for not snitching on someone else.

“What?” Chris would reply, after putting down The Handmaid’s Tale and removing one of his earbuds. Then his dad would insist that it was very important that he look out the window to see a very important cloud or glacial landform.

Chris would comply and then return to his book and music.

“Chris,” his mom would say, “you need to review our travel itinerary.” His mom cried herself to sleep when she heard that Nigel Stock died, gave up on ever finding True Love, and resolved to achieve a husband and child instead.

“What?” Chris would reply, after putting down The Handmaid’s Tale and removing one of his earbuds. Then his mom would insist that it was very important that he review photos and prose regarding their managed tour package in Costa Rica, because he wouldn’t want to show up there unprepared. Chris would passive-aggressively stare at each page of the packet, then hand it back to his mother.

It was already somewhat clear that due to delays in taking off they would be missing their connecting flight to Costa Rica. About ⅓ of the passengers on the aeroplane were also going to Costa Rica, and were discussing the probable missed connection amongst themselves and with the flight staff.

Chris’s parents were oblivious to all of this, despite being native speakers of English. Additionally, just as they were unaware of what other people were discussing, they imagined that no one else could hear their private family discussions.

Everyone on the plane missed their connecting flights. Chris’s parents continued to be terrible human beings.

Posted on 2017-03-08 Tags: etiamdisco

Petter Reinholdtsen: How does it feel to be wiretapped, when you should be doing the wiretapping...

8 March, 2017 - 17:50

So the new president of the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh, if there is one thing the revelations from Snowden documented, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, judges and the rest of the people in the USA.

Next, the Federal Bureau of Investigation asked the Department of Justice to go public rejecting the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA believes it has all the legal backing it needs to conduct mass surveillance on the entire world.

There is even the director of the FBI stating that he never saw an order requesting the wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?

What I find most sad in this story is how Norwegian journalists present it. In a news report on the radio the other day from the Norwegian National Broadcasting Company (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.

Matthew Garrett: The Internet of Microphones

8 March, 2017 - 08:30
So the CIA has tools to snoop on you via your TV and your Echo is testifying in a murder case and yet people are still buying connected devices with microphones in and why are they doing that the world is on fire surely this is terrible?

You're right that the world is terrible, but this isn't really a contributing factor to it. There's a few reasons why. The first is that there's really not any indication that the CIA and MI5 ever turned this into an actual deployable exploit. The development reports[1] describe a project that still didn't know what would happen to their exploit over firmware updates and a "fake off" mode that left a lit LED which wouldn't be there if the TV were actually off, so there's a potential for failed updates and people noticing that there's something wrong. It's certainly possible that development continued and it was turned into a polished and usable exploit, but it really just comes across as a bunch of nerds wanting to show off a neat demo.

But let's say it did get to the stage of being deployable - there's still not a great deal to worry about. No remote infection mechanism is described, so they'd need to do it locally. If someone is in a position to reflash your TV without you noticing, they're also in a position to, uh, just leave an internet connected microphone of their own. So how would they infect you remotely? TVs don't actually consume a huge amount of untrusted content from arbitrary sources[2], so that's much harder than it sounds and probably not worth it because:


Seriously your phone is like eleven billion times easier to infect than your TV is and you carry it everywhere. If the CIA want to spy on you, they'll do it via your phone. If you're paranoid enough to take the battery out of your phone before certain conversations, don't have those conversations in front of a TV with a microphone in it. But, uh, it's actually worse than that.

These days audio hardware usually consists of a very generic codec containing a bunch of digital→analogue converters, some analogue→digital converters and a bunch of I/O pins that can basically be wired up in arbitrary ways. Hardcoding the roles of these pins makes board layout more annoying, and some people want more inputs than outputs while others want the reverse, so it's not uncommon for it to be possible to reconfigure an input as an output or vice versa. From software.

Anyone who's ever plugged a microphone into a speaker jack probably knows where I'm going with this. An attacker can "turn off" your TV, reconfigure the internal speaker output as an input and listen to you on your "microphoneless" TV. Have a nice day, and stop telling people that putting glue in their laptop microphone is any use unless you're telling them to disconnect the internal speakers as well.

If you're in a situation where you have to worry about an intelligence agency monitoring you, your TV is the least of your concerns - any device with speakers is just as bad. So what about Alexa? The summary here is, again, it's probably easier and more practical to just break your phone - it's probably near you whenever you're using an Echo anyway, and they also get to record you the rest of the time. The Echo platform is very restricted in terms of where it gets data[3], so it'd be incredibly hard to compromise without Amazon's cooperation. Amazon's not going to give their cooperation unless someone turns up with a warrant, and then we're back to you already being screwed enough that you should have got rid of all your electronics way earlier in this process. There are reasons to be worried about always listening devices, but intelligence agencies monitoring you shouldn't generally be one of them.

tl;dr: The CIA probably isn't listening to you through your TV, and if they are then you're almost certainly going to have a bad time anyway.

[1] Which I have obviously not read
[2] I look forward to the first person demonstrating code execution through malformed MPEG over terrestrial broadcast TV
[3] You'd need a vulnerability in its compressed audio codecs, and you'd need to convince the target to install a skill that played content from your servers


Bits from Debian: New Debian Developers and Maintainers (January and February 2017)

8 March, 2017 - 06:30

The following contributors got their Debian Developer accounts in the last two months:

  • Ulrike Uhlig (ulrike)
  • Hanno Wagner (wagner)
  • Jose M Calhariz (calharis)
  • Bastien Roucariès (rouca)

The following contributors were added as Debian Maintainers in the last two months:

  • Dara Adib
  • Félix Sipma
  • Kunal Mehta
  • Valentin Vidic
  • Adrian Alves
  • William Blough
  • Jan Luca Naumann
  • Mohanasundaram Devarajulu
  • Paulo Henrique de Lima Santana
  • Vincent Prat


Daniel Stender: Remotely deploy a WSGI application (as a Debian package) with Ansible

8 March, 2017 - 02:18

This is a mini workshop as an introduction to using Ansible for the administration of Debian systems. As an example, it shows how this configuration management tool can be used to remotely set up a simple WSGI application running on an Apache web server on a Debian installation, to make it available on the net. The application used as an example is httpbin by Runscope. This is a useful HTTP request service for the development of web software or any other purposes, featuring a number of specific endpoints that can be used for different testing matters. For example, the address http://<address>/user-agent of httpbin returns the user agent identification of the client program which has been used to query it (taken from the header of the request). There are official instances of this request server running on the net. WSGI is a widespread standard for programming web applications in Python, and httpbin is implemented in Python using the Flask web framework.
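For readers who haven't met WSGI before, the whole interface fits in a few lines. This is a generic illustration, not httpbin's code:

```python
# A WSGI application is a callable that receives the request environment
# (a dict) and a start_response callback, and returns an iterable of
# bytes making up the response body.
def application(environ, start_response):
    body = b"Hello from WSGI\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

mod_wsgi looks for a callable with exactly this name, application, which is why the one-line starter script used later in this workshop simply imports httpbin's Flask app under that name.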

The basis of the workshop is a simple base installation of an up-to-date Debian 8 “Jessie” on a demonstration host; the latest official release of that is 8.7. As a first step, the installation has to be switched over to the “testing” branch of Debian, because the Debian packages of httpbin are comparatively new and will be introduced into the “stable” branch of the archive for the first time with the upcoming major release 9 “Stretch”. After that, the Apache packages which are needed to make it available (apache2 and libapache2-mod-wsgi – other web servers could of course be used instead), and which are not part of a base installation, are installed from the archive. The web server is then launched remotely, the httpbin package is pulled in as well, and the service is integrated into Apache to run on it. To achieve that, two configuration files must be deployed on the target system, and a few additional operations are needed to get everything working together. Every step is preconfigured within Ansible, so that the whole process can be launched by a single command on the control node, and can be run on a single target machine or a number of comparable ones automatically and reproducibly.

If a server is needed to try this workshop out, straightforward cloud server instances are available on the net, for example at DigitalOcean, but – let me underline this – there are other cloud providers which offer the same things, too! If it’s needed only for a limited time, for experiments or other purposes, low-priced “droplets” are available here which are billed by the hour. After registering, the machine(s) can easily be set up over the web interface (choose “Debian 8.7” as OS), but there are also command line clients available, like doctl (which is not yet available as a Debian package). For convenient use of a droplet, the user should first generate an SSH key pair on the local machine:

$ ssh-keygen -t rsa -b 4096 -C "" -f ~/.ssh/mykey

The public part of the key, ~/.ssh/, can then be uploaded into the user account before the droplet is created; it can then be integrated automatically. There is a good introduction to the whole process available in the excellent tutorial series, here. Ansible can then use the SSH key pair to log into a droplet without the need to type in the password every time. On a cloud server like this, carrying a Debian base system, the examples in this workshop can be tried out well. Ansible works client-less and doesn’t need to be installed on the remote system, only on the control node; however, a Python 2.7 interpreter is needed on the remote side (the base system of DigitalOcean includes that).

So that Ansible can do anything on them, the remote servers which are going to be controlled must be added to /etc/ansible/hosts. This is a configuration file in the INI format for DNS names and IP addresses. For a flexible organisation of the server inventory it’s possible to group hosts here, IP ranges can be given, and optional variables can be used, among other useful things (the default file contains a couple of examples). One or a couple of servers (in Ansible they are called “hosts”) on which something particular is going to happen (like httpbin being installed) can be added like this (the group name is arbitrary):

[httpbin]
<address>
Whether Ansible could communicate with the hosts in the group and actually can operate on them can be verified by just pinging them like this:

$ ansible httpbin -m ping -u root --private-key=~/.ssh/mykey | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

The command succeeded, so it appears there isn’t any significant problem regarding this machine. The return value "changed": false indicates that there haven’t been any changes on that host as a result of executing this command. Next to ping there are several other modules which can be used with the command line tool ansible in the same way, and these modules are actually something like the core components of Ansible. The module shell, for example, can be used to execute shell commands on the remote machine, like uname to get some system information returned from the server:

$ ansible httpbin -m shell -a "uname -a" -u root --private-key=~/.ssh/mykey | SUCCESS | rc=0 >>
Linux debian-512mb-fra1-01 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux

In the same way, the module apt could be used to remotely install packages. But with that there’s no major advantage over other software products that offer similar functions, and using those modules on the command line is just the basics of Ansible usage.

Playbooks in Ansible are YAML scripts for the manipulation of the registered hosts in /etc/ansible/hosts. Different tasks can be defined here for successive processing. A simple playbook for changing the package source from “stable” to “testing”, for example, goes like this:

 - hosts: httpbin
   tasks:
   - name: remove "jessie" package source
     apt_repository: repo='deb jessie main' state=absent

   - name: add "testing" package source
     apt_repository: repo='deb testing main contrib non-free' state=present

   - name: upgrade packages
     apt: update_cache=yes upgrade=dist

First, as with the CLI tool ansible above, the targeted host group httpbin is chosen. The default user “root” and the SSH key could be fixed here too, to spare the need to give them on the command line. Then there are three tasks defined which are worked through consecutively: with the module apt_repository, the preset package source “jessie” gets removed from /etc/apt/sources.list. Then a new package source for the “testing” archive gets added to /etc/apt/sources.list.d/ using the same module. After that, the apt module is used to upgrade the package inventory (it performs apt-get dist-upgrade), after an update of the package cache has taken place (by running apt-get update).

A playbook like this (the filename is arbitrary but commonly carries the suffix .yml) can be run by the CLI tool ansible-playbook, like this:

$ ansible-playbook httpbin.yml -u root --private-key=~/.ssh/mykey

Ansible then works through the individual “plays” of the tasks on the remote server(s) top-down, and thanks to a high-speed net connection and SSD block device hardware, the change of the system to a Debian Testing base installation takes only around a minute to complete in the cloud. While working, Ansible puts out status reports for the individual operations. If certain changes to the base system have already taken place, as when a playbook is run through once more, the modules of course sense that and simply report that the system hasn’t been changed, because the desired state is already in place. Beyond the basic playbook shown here, there are more advanced features like register and when available, to bind the execution of a play to the error-free result of a previous one.
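To illustrate register and when, a hypothetical pair of plays could look like the following – the curl check and the task names are invented for this sketch and are not part of the workshop’s playbook:

```yaml
   - name: check whether httpbin answers locally
     command: curl -sf http://localhost/user-agent
     register: check
     ignore_errors: yes

   - name: restart Apache if the check failed
     service: name=apache2 state=restarted
     when: check|failed
```

Here register stores the result of the first play in the variable check, and the when condition lets the second play run only if the first one failed.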

The apt module then can be used to install the three needed binary packages one after another:

   - name: install apache2
     apt: pkg=apache2 state=present

   - name: install mod_wsgi
     apt: pkg=libapache2-mod-wsgi state=present

   - name: install httpbin
     apt: pkg=python-httpbin state=present

The Debian packages are configured in such a way that the Apache web server is launched immediately after installation, and the Apache module mod_wsgi is automatically integrated. If that is not desired, there are Ansible modules available for operating on Apache which can reverse this. By the way, after the package has been installed, the httpbin server can be launched with python -m httpbin.core, but this runs only a mini web server which is not suitable for production use.

To get httpbin running on the Apache web server, two configuration files are needed. They can be set up in the project directory on the control node and then uploaded onto the remote machine with a suitable Ansible module. The file httpbin.wsgi (the name is again arbitrary) contains a single line, which is the starter for the WSGI application:

from httpbin import app as application

The copy module can be used to deploy that script on the host, while the target folder /var/www/httpbin must be created beforehand by the file module. In addition, a separate user account like “httpbin” (the name is also arbitrary) is needed to run the application, and the user module can set this up. The demonstration playbook continues with the plays performing these three operations:

   - name: mkdir /var/www/httpbin
     file: path=/var/www/httpbin state=directory

   - name: set up user "httpbin"
     user: name=httpbin

   - name: copy WSGI starter
     copy: src=httpbin.wsgi dest=/var/www/httpbin/httpbin.wsgi owner=httpbin group=httpbin mode=0644 

Another configuration file, httpbin.conf, is needed for Apache on the remote server to include the WSGI application httpbin as a virtual host. It looks like this:

<VirtualHost *>
 WSGIDaemonProcess httpbin user=httpbin group=httpbin threads=5
 WSGIScriptAlias / /var/www/httpbin/httpbin.wsgi

 <Directory /var/www/httpbin>
  WSGIProcessGroup httpbin
  WSGIApplicationGroup %{GLOBAL}
  Order allow,deny
  Allow from all
 </Directory>
</VirtualHost>

This file needs to be copied into the folder /etc/apache2/sites-available on the host, which already exists when the apache2 package is installed. The remaining operations needed to tie everything together: the default welcome page of Apache blocks everything else and should be disabled with Apache’s CLI tool a2dissite, and after that the new virtual host needs to be activated with the complementary tool a2ensite – both can be run remotely via the command module. Then the Apache server on the remote machine must be restarted to read in the new configuration. You’ve guessed it already: all of this is easy to perform with Ansible:

   - name: deploy configuration script
     copy: src=httpbin.conf dest=/etc/apache2/sites-available owner=root group=root mode=0644

   - name: deactivate default welcome screen
     command: a2dissite 000-default.conf
   - name: activate httpbin virtual host
     command: a2ensite httpbin.conf

   - name: restart Apache
     service: name=apache2 state=restarted 

That’s it. After this playbook has been performed by Ansible on one (or several) freshly set up remote Debian base installations, the httpbin request server is available on the Apache web server and can be queried from anywhere with a web browser, or for example with curl:

$ curl
{
  "user-agent": "curl/7.50.1"
}

With the broad set of Ansible modules and with playbooks, a lot of tasks can be accomplished, like the example problem explained here. But the range of functions of Ansible is still more comprehensive; discussing it all would go beyond the scope of this blog post. For example, playbooks offer more advanced features like event handlers, which can be used for recurring operations like the restart of Apache in more extensive projects. Beyond playbooks, templates can be set up in roles which behave differently on selected machine groups – Ansible uses Jinja2 as its template engine for that. And the scope of the basic modules can be expanded by employing external tools.
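For a taste of the Jinja2 templating just mentioned, a template for Ansible's template module could look like the fragment below; the variable names (httpbin_port, httpbin_user) are assumptions made up for this sketch:

```shell
# Hypothetical Jinja2 template for Ansible's "template" module;
# {{ ... }} placeholders are filled in from playbook variables.
cat > /tmp/httpbin.conf.j2 <<'EOF'
<VirtualHost *:{{ httpbin_port }}>
 WSGIDaemonProcess httpbin user={{ httpbin_user }} group={{ httpbin_user }} threads=5
 WSGIScriptAlias / /var/www/httpbin/httpbin.wsgi
</VirtualHost>
EOF
grep -c '{{' /tmp/httpbin.conf.j2   # two lines carry Jinja2 variables
```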

A word on why it could be useful to run your own instances of the httpbin request server instead of using the official ones provided on the net by Runscope: some people would prefer to run a private instance, for example in the local network, instead of querying the one on the internet. Or, for development reasons, a couple or even a large number of identical instances might be needed – Ansible is ideal for setting them up. Anyway, the JavaScript bindings to tracking services like Google Analytics in httpbin/templates/trackingscripts.html have been patched out in the Debian package. That could be another reason to prefer setting up your own instance on a Debian server.

Hideki Yamane: ftp, gone.

7 March, 2017 - 21:26 shutting down FTP services, see It may be better to consider doing the same in Debian, as I said.

Jaldhar Vyas: 7DRL 2017

7 March, 2017 - 12:52

It's time once again for the 7-day Roguelike challenge. This year's attempt is entitled "Casket of Deplorables".

Further updates will be posted here.

Dirk Eddelbuettel: RProtoBuf 0.4.9

7 March, 2017 - 08:17

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

The RProtoBuf 0.4.9 release is the fourth and final update this weekend following the request by CRAN to not use package= in .Call() when PACKAGE= is really called for.

Some of the code in RProtoBuf 0.4.9 had this bug; some other entry points had neither argument (!!). With the ongoing drive to establish proper registration of entry points, a few more issues came up, all of which are now addressed. And we had some other unreleased minor cleanup, so this made for a somewhat longer (compared to the other updates this weekend) NEWS list:

Changes in RProtoBuf version 0.4.9 (2017-03-06)
  • A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Symbol registration is enabled in useDynLib

  • Several missing PACKAGE= arguments were added to the corresponding .Call invocations

  • Two (internal) C++ functions were renamed with suffix _cpp to disambiguate them from R functions with the same name

  • All of the above were part of #26

  • Some editing corrections were made to the introductory vignette (David Kretch in #25)

  • The '' file was updated, and renamed from the older convention '', along with 'src/Makevars'. (PR #24 fixing #23)

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: RVowpalWabbit 0.0.9

7 March, 2017 - 08:06

The RVowpalWabbit package update is the third of four upgrades requested by CRAN, following RcppSMC 0.1.5 and RcppGSL 0.3.2.

This package being somewhat raw, the change was simple and just meant converting the single entry point to using Rcpp Attributes -- which addressed the original issue in passing.

No new code or features were added.

We should mention that there is parallel work ongoing in a higher-level package interfacing the vw binary – rvw – as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch.

More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: RcppGSL 0.3.2

6 March, 2017 - 02:39

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

RcppGSL release 0.3.2 is one of several maintenance releases this weekend to fix an issue flagged by CRAN: calls to .Call() sometimes used package= where PACKAGE= was meant. This came up now while the registration mechanism is being reworked.

So RcppGSL was updated too, and we took the opportunity to bring several packaging aspects up to the newest standards, including support for the soon-to-be required registration of routines.

No new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.2 (2017-03-04)
  • In the fastLm function, .Call now uses the correct PACKAGE= argument

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); also use .registration=TRUE in useDynLib in NAMESPACE

  • The skeleton configuration for created packages was updated.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: RcppSMC 0.1.5

6 March, 2017 - 02:38

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article.

RcppSMC release 0.1.5 is one of several maintenance releases this weekend to fix an issue flagged by CRAN: calls to .Call() sometimes used package= where PACKAGE= was meant. This came up now while the registration mechanism is being reworked.

Hence RcppSMC was updated, and we took the opportunity to bring several packaging aspects up to the newest standards, including support for the soon-to-be required registration of routines.

No new code or features were added. The NEWS file entries follow below:

Changes in RcppSMC version 0.1.5 (2017-03-03)
  • Correct .Call to use PACKAGE= argument

  • DESCRIPTION, NAMESPACE, changes to comply with current R CMD check levels

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Updated .travis.yml file for continuous integration

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Julien Viard de Galbert: Raspberry Pi 3 as desktop computer

5 March, 2017 - 23:25

For about six months I’ve been using a Raspberry Pi 3 as my desktop computer at home.

The overall experience is fine, but I had to do a few adjustments.
The first was to use KeePass, the second to compile gcc for cross-compilation (i.e. to use buildroot).


I’m using KeePass + KeeFox to maintain my passwords on the various websites (and avoid reusing the same everywhere).
For this to work on the Raspberry Pi, one needs to use mono from Xamarin:

sudo apt-key adv --keyserver hkp:// --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update

sudo apt-get upgrade
sudo apt-get install mono-runtime

The install instruction comes from mono-project and the initial pointer was found on raspberrypi forums, stackoverflow and Benny Michielsen’s blog.
And for some plugin to work I think I had to apt-get install mono-complete.

Compiling gcc

Using the Raspberry Pi 3, I recovered an old project based on buildroot for the Raspberry Pi 2, and just building the tool-chain raised a few issues.

First the compilation would stop during gmp compilation:

 /usr/bin/gcc -std=gnu99 -c -DHAVE_CONFIG_H -I. -I.. -D__GMP_WITHIN_GMP -I.. -DOPERATION_divrem_1 -O2 -Wa,--noexecstack tmp-divrem_1.s -fPIC -DPIC -o .libs/divrem_1.o
tmp-divrem_1.s: Assembler messages:
tmp-divrem_1.s:129: Error: selected processor does not support ARM mode `mls r1,r4,r8,r11'
tmp-divrem_1.s:145: Error: selected processor does not support ARM mode `mls r1,r4,r8,r11'
tmp-divrem_1.s:158: Error: selected processor does not support ARM mode `mls r1,r4,r8,r11'
tmp-divrem_1.s:175: Error: selected processor does not support ARM mode `mls r1,r4,r3,r8'
tmp-divrem_1.s:209: Error: selected processor does not support ARM mode `mls r11,r4,r12,r3'

Makefile:768: recipe for target 'divrem_1.lo' failed
make[]: *** [divrem_1.lo] Error 1

I Googled the error and found this post on the Raspberry Pi forum, which was not really helpful…
But I finally found an explanation on Jan Hrach’s page on the subject.
The raspbian distribution is still optimized for the first Raspberry Pi, so basically the compiler is limited to the old Raspberry Pi instruction set, while I was compiling gcc for a Raspberry Pi 2 and needed the extra instructions.

The proposed solution is to basically update raspbian to debian proper.

While this is a neat idea, I still wanted to get some raspbian specific packages (like the kernel) but wanted to be sure that everything else comes from debian. So I did some apt pinning.

First, I found that pinning alone is not sufficient, so when updating sources.list with plain Debian Jessie, make sure to add these lines before the raspbian lines:

# add official debian jessie (real armhf gcc)
deb jessie main contrib non-free
deb-src jessie main

deb jessie/updates main
deb-src jessie/updates main

deb jessie-updates main
deb-src jessie-updates main

Then run the following to get the debian gpg keys, but don’t yet upgrade your system:

apt update
apt install debian-archive-keyring

Now, let’s add the pinning.
First, if you were using APT::Default-Release "stable"; in your apt.conf (as I did), remove it. It does not mix well with the fine-grained pinning we will implement.

Then, fill your /etc/apt/preferences file with the following:

# Debian
Package: *
Pin: release o=Debian,a=stable,n=jessie
Pin-Priority: 700

# Raspbian
Package: *
Pin: release o=Raspbian,a=stable,n=jessie
Pin-Priority: 600

Package: *
Pin: release o=Raspberry Pi Foundation,a=stable,n=jessie
Pin-Priority: 600

# Mono
Package: *
Pin: release v=7.0,o=Xamarin,a=stable,n=wheezy,l=Xamarin-Stable,c=main
Pin-Priority: 800

Note: You can use apt-cache policy (no parameter) to debug pinning.
The pinning above is mainly based on the origin field of the repositories (o=).
Finally you can upgrade your system:

apt update 
apt-cache policy gcc 
rm /var/cache/apt/archives/* 
apt upgrade 
apt-cache policy gcc

Note: Removing the cache ensures we download the packages from Debian, as raspbian uses the exact same package names but we know they are not compiled with a real armhf tool-chain.

Second issue with gcc

The build stopped on recipe for target 's-attrtab' failed. There are many references to this on the web; that one was easy, it ‘just’ needed more memory, so I added some swap on the external SSD I was already using to work on buildroot.
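For reference, setting up such a swap file goes roughly like this; the size (16 MiB) and the /tmp path are stand-ins for a larger file on the real SSD mount point, and the final swapon step needs root:

```shell
# Sketch: create and format a swap file; only swapon requires root.
dd if=/dev/zero of=/tmp/swapfile bs=1M count=16 2>/dev/null
chmod 600 /tmp/swapfile
mkswap /tmp/swapfile >/dev/null
# sudo swapon /tmp/swapfile    # activation step, root only
stat -c %s /tmp/swapfile       # 16777216 bytes
```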


That’s it for today, not bad considering my last post was more than 3 years ago…

Thorsten Alteholz: My Debian Activities in February 2017

5 March, 2017 - 23:16

FTP assistant

This month you didn’t hear much from me, as I only marked 97 packages for accept and rejected 17 packages. I only sent one email to maintainers asking questions.

Nevertheless the NEW queue is down to 46 packages at the moment, so my fellows in misery do a really good job :-).

Debian LTS

This was my thirty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 13.00h. During that time I did uploads of

  • [DLA 832-1] bitlbee security update for three CVEs
  • [DLA 837-1] radare2 security update for one CVE
  • [DLA 839-1] tnef security update for four CVEs
  • [DLA 843-1] bind9 security update for one CVE

Thanks again to all the people who complied with my requests to test a package!

I also prepared the Jessie DSA for tnef which resulted in DSA 3798-1.

At the end of the month I did another week of frontdesk work and among other things I filed some bugs against packages from [1].


Other stuff

Reading about openoverlayrouter in the German magazine c’t, I uploaded that software to Debian.

I also uploaded npd6, which helped me to reach GitHub from an IPv6-only machine.
Further I uploaded pyicloud.

As my DOPOM for this month I adopted bottlerocket. Though you can’t buy the hardware anymore, there still seem to be some users around.

Vincent Bernat: Netops with Emacs and Org mode

5 March, 2017 - 17:30

Org mode is a package for Emacs to “keep notes, maintain todo lists, plan projects and author documents”. It can execute embedded snippets of code and capture the output (through Babel). It’s an invaluable tool for documenting your infrastructure and your operations.

Here are three (relatively) short videos exhibiting Org mode use in the context of network operations. In all of them, I am using my own junos-mode which features the following perks:

  • syntax highlighting for configuration files,
  • commit of configuration snippets to remote devices, and
  • execution of remote commands.

Since some Junos devices can be quite slow, commits and remote executions are done asynchronously1 with the help of a Python helper.

In the first video, I take some notes about configuring BGP add-path feature (RFC 7911). It demonstrates all the available features of junos-mode.

In the second video, I execute a planned operation to enable this feature in production. The document is a modus operandi and contains the configuration to apply and the commands to check if it works as expected. At the end, the document becomes a detailed report of the operation.

In the third video, a cookbook has been prepared to execute some changes. I set some variables and execute the cookbook to apply the change and check the result.

  1. This is a bit of a hack since Babel doesn’t have native support for that. Also have a look at ob-async which is a language-independent implementation of the same idea. 

Shirish Agarwal: To say or not to say

5 March, 2017 - 10:43

For people who are visually differently-abled, the above reads –

“To learn who rules over you, simply find out who you are not allowed to criticize” – Voltaire is said to have written this in the late 17th or the 18th century, and those words were as apt in those times as they are in these turbulent times as well.

The below topic requires a bit of maturity, so if you are easily offended, feel free not to read further.

While this week-end I was supposed to share about the recent Science Day celebrations we did last week, I would probably explore that next week instead.

This week the attempt is to share thoughts which have been simmering at the back of my mind for more than 2 weeks and whose answers are not clear to me.

My buttons were pressed when Martin F. Krafft shared about a CoC violation and the steps taken therein. While it is easy to say with 20:20 hindsight that the gentleman acted foolishly, I don’t really know the circumstances well enough to pass judgement so quickly. In reality, while I didn’t understand the ‘joke’ itself, I have to share some background, by way of anecdotes, as to why it isn’t so easy for me to make a judgement call.

a. I don’t know the topics chosen by stand-up comedians in other countries; in India, most of the stand-up acts are either about dating or sex or somewhere in-between, which is lovingly given the name ‘Leela’ (dance of life) in Indian mythology. I have been to several such acts over the years at different events and occasions, and 99.99% of the time I would see them dealing with pedophilia, necrophilia and all sorts of deviance in sexuality, with people laughing wildly. But a couple of times, when the comedian simply said the word ‘sex’, educated, probably more than a few well-travelled middle to upper-middle class people were shocked into silence. I had seen this not once but 2-3 times in different environments and was left wondering just a couple of years back: is sex such a bad word that people get so easily shocked? Then how is it that we have 1.25 billion+ people in India? There had to be some people having sex. I don’t think all 1.25 billion people are test-tube babies.

b. This actually was what led to my quandary last year with my sharing of ‘My Experience with Debian’, which I had carefully prepared for newbies. Seeing seasoned Debian people, I knew my lame observations wouldn’t cut ice with them and hence had to share my actual story, which involved a bit of porn. I was in two minds whether or not to say it, till my eyes caught a t-shirt which said ‘We make porn’ or something to that effect. That helped me share my point.

c. Which brings me to another point: it seems it is becoming increasingly difficult to talk about anything without first apologizing to everyone, not really knowing who will take offence at what and what the repercussions might be. In local sharings, I always start with a blanket apology: if I say something that offends you, please let me know afterwards so I can work on it. As the saying goes, ‘You can’t please everyone’, and that is what happens. Somebody, sooner or later, will take offence at something and re-interpret it in ways which I had not thought of.

From the little sharings and interactions I have been part of, I find people take offence at the most innocuous things. For instance, one of the easy routes to not offending anyone is to use self-deprecating humour (or so I thought), either about my race, caste, class or even my issues with weight, and each of the above would offend somebody. Charlie Chaplin didn’t have those problems. If somebody is from my caste, I’m portraying the caste in a certain light, a certain slant. If I’m talking about weight issues, then anybody who is like me (fat) feels that the world is laughing at them rather than at me, or that they will be discriminated against. While I find the last point a bit valid, it leaves me with no tools and no humour. I have neither the observational powers nor the skills that Kapil Sharma has, and have to be me.

While I have no clue what to do next, I feel the need to also share why humour is important in any sharing:

a. Break – When any speaker uses humour, the idea is to take a break from a serious topic. It helps to break the monotony of the talk, especially if the topic is full of jargon and new concepts. A small comedic relief brings the attendees’ attention back to the topic, as it tends to wander in a long monotonous talk.

b. Bridge – Some of the better speakers use one or more humorous anecdotes to explain and/or bridge the chasm between two different concepts. Some are able to produce humour on the fly, while others like me have to rely on tried and tested methods.

There is another thing as well: humour seems to be a mixture of social, cultural and political context, and it’s very easy to have it backfire on you.

For instance, I attempted humour about refugees, probably not the best topic to try humour on in the current political climate, and predictably, it didn’t go down well. I had to share and explain Robin Williams’ slightly dark yet humorous tale ‘Moscow on the Hudson’. The film provides comedy and pathos in equal measure. You are left identifying with Vladimir Ivanoff (Robin Williams’ character), especially in the last scene where he learns of his grandmother dying and remembers her and his motherland, Russia, and plays a piece on his saxophone as a tribute both to his grandmother and the motherland. Apparently, at the height of the cold war, if a Russian defected to the United States (the land of Satan, among other such terms used), he couldn’t return to Russia.

The movie, seen some years back, left a deep impact on me. For all the shortcomings and ills that India has, even if I could leave, would and could I be happy anywhere else? The answers are not so easy. Most NRIs (Non-Resident Indians) who emigrated for good did it not so much for themselves but for their children, so the children would hopefully have a better upbringing, better facilities and better opportunities than they would have got here.

I talked to more than a few NRI’s and while most of them give standardized answers, talking awhile and couple of beers or their favourite alcohol later, you come across deeply conflicted human beings whose heart is in India and their job, profession and money interests compel them to be in the country where they are serving.

And Indian movies don’t make it easier for the Indian populace when trying to integrate into a new place. Some of the biggest hits of yesteryear were about keeping a distinct Indian culture in the new country, while the message of most countries is integration. I know of friends living in Germany who have to struggle through their German in order to be counted as citizens; the same, I guess, is true of other countries as well, not just the language but the customs too. They also probably struggle with learning more than one language and having an amalgamation of values which somehow they and their children have to make sense of.

I was mildly shocked last week to learn that Mishi Choudary had to train people in the U.S. to differentiate between the Afghan style of wearing a turban and the Punjabi style. A simple search for ‘Afghani turban’ and ‘Punjabi turban’ reveals a lot of differences between the two cultures. In fact, in the way they talk and the way they walk, there is a lot that differentiates the two cultures.

The second shocking video was of an African-American man racially abusing an Indian-American girl. At first I didn’t believe it, till I saw the video on Facebook.

My point through all that is that humour, that clean, simple exercise which brings a smile to you and uplifts the spirit, doesn’t seem to be as easy as it once was.

Comments, suggestions, criticisms all are welcome.

Filed under: Miscellenous Tagged: #Elusive, #Fear, #hind-sight, #Humour, #immigrant, #integration, #Mishi Choudary, #refugee, #Robin Williams, #self-deprecating, #SFLC, #two-minds

Simon Josefsson: GPS on Replicant 6

5 March, 2017 - 00:08

I use Replicant on my main Samsung S3 mobile phone. Replicant is a fully free Android distribution. One consequence of “fully free” is that some functionality does not work properly, because the hardware requires non-free software. I am in the process of upgrading my main phone to the latest beta builds of Replicant 6. Getting GPS to work on Replicant/S3 is not that difficult. I have made the decision that I am willing to compromise on freedom a bit for my Geocaching hobby. I have written before how to get GPS to work on Replicant 4.0 and GPS on Replicant 4.2. When I upgraded to Wolfgang’s Replicant 6 build back in September 2016, it took some time to figure out how to get GPS to work. I prepared notes on non-free firmware on Replicant 6 which included a section on getting GPS to work. Unfortunately, that method requires that you build your own image and have access to the build tree, which is not for everyone. This writeup explains how to get GPS to work on Replicant 6 without building your own image. Wolfgang already explained how to add all other non-free firmware to Replicant 6, but that did not cover GPS. The reason is that GPS requires non-free software to run on your main CPU. You should understand the consequences of this before proceeding!

The first step is to download a Replicant 6.0 image; currently they are available from the Replicant 6.0 forum thread. Download the file and flash it to your phone as usual. Make sure everything (except GPS of course) works after loading the other non-free firmware (Wifi, Bluetooth etc.) that you may want using "./ i9300 all". You can install the Geocaching client c:geo via F-Droid by adding as a separate repository. Start the app and verify that GPS does not work. Keep the image file around, you will need it later.

The tricky part about GPS is that the daemon is started through the init system of Android, specified by the file /. Replicant ships with the GPS part commented out. To modify this file, we need to bring out our little toolbox. Modifying the file on the device itself will not work: the root filesystem is extracted from a ramdisk file on every boot, so any changes made to the file will not be persistent. The file / is stored in the boot.img ramdisk, and that is the file we need to modify to make a persistent modification.

First we need the unpackbootimg and mkbootimg tools. If you are lucky, you might find them pre-built for your operating system. I am using Debian and I couldn’t find them easily. Building them from scratch is however not that difficult. Assuming you have a normal build environment (i.e., apt-get install build-essential), try the following to build the tools. I was inspired by a post on unpacking and editing boot.img for some of the following instructions.

git clone
cd android_system_core/
git checkout cm-13.0 
cd mkbootimg/
gcc -o ./mkbootimg -I ../include ../libmincrypt/*.c ./mkbootimg.c
gcc -o ./unpackbootimg -I ../include ../libmincrypt/*.c ./unpackbootimg.c
sudo cp mkbootimg unpackbootimg /usr/local/bin/

You are now ready to unpack the boot.img file. You will need the replicant ZIP file in your home directory. Also download the small patch I made for the file: Save the patch as replicant-6-gps-fix.diff in your home directory.

mkdir t
cd t
unzip ~/ 
unpackbootimg -i ./boot.img
mkdir ./ramdisk
cd ./ramdisk/
gzip -dc ../boot.img-ramdisk.gz | cpio -imd
patch < ~/replicant-6-gps-fix.diff 

Assuming the patch applied correctly (you should see output like "patching file" at the end), you will now need to put the ramdisk back together.

find . ! -name . | LC_ALL=C sort | cpio -o -H newc -R root:root | gzip > ../new-boot.img-ramdisk.gz
cd ..
mkbootimg --kernel ./boot.img-zImage \
--ramdisk ./new-boot.img-ramdisk.gz \
--second ./boot.img-second \
--cmdline "$(cat ./boot.img-cmdline)" \
--base "$(cat ./boot.img-base)" \
--pagesize "$(cat ./boot.img-pagesize)" \
--dt ./boot.img-dt \
--ramdisk_offset "$(cat ./boot.img-ramdisk_offset)" \
--second_offset "$(cat ./boot.img-second_offset)" \
--tags_offset "$(cat ./boot.img-tags_offset)" \
--output ./new-boot.img
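The find | cpio | gzip pipeline used above is worth a dry run on a toy directory before trusting it with a real boot image; everything below (the paths and the init.rc content) is made up for the demonstration:

```shell
# Toy round trip of the ramdisk pack/unpack pipeline.
mkdir -p /tmp/rd /tmp/rd-check
echo 'service glgps /system/bin/glgps' > /tmp/rd/init.rc
cd /tmp/rd
find . ! -name . | LC_ALL=C sort | cpio -o -H newc 2>/dev/null | gzip > /tmp/rd.gz
cd /tmp/rd-check
gzip -dc /tmp/rd.gz | cpio -imdu 2>/dev/null
cat init.rc    # the file survived the round trip
```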

Reboot your phone to the bootloader:

adb reboot bootloader

Then flash the new boot image back to your phone:

heimdall flash --BOOT new-boot.img

The phone will restart. To finalize things, you need the non-free GPS software components glgps, and gps.cer. Before, I used a complicated method to extract these files from a CyanogenMod 13.x archive. Fortunately, Lineage OS is now offering downloads containing the relevant files too. You will need to download some files, extract them, and load them onto your phone.

mkdir lineage
cd lineage
unzip ../
adb root
adb wait-for-device
adb remount
adb push system/bin/glgps /system/bin/
adb push system/lib/hw/ /system/lib/hw/
adb push system/bin/gps.cer /system/bin/

Now reboot your phone and start c:geo and it should find some satellites. Congratulations!


Creative Commons License – the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.