Planet Debian

Planet Debian - http://planet.debian.org/

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2016

16 December, 2016 - 16:43

Like each month, here is a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 150 work hours were dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change this month, and in fact we haven’t had any new sponsors since September. We still need a couple of additional sponsors to reach our objective of funding the equivalent of a full-time position.

The security tracker currently lists 40 packages with a known CVE, and the dla-needed.txt file lists 36. We don’t seem to be catching up on this small backlog. The reasons are not clear, but I noticed that a few packages take a lot of time due to the number of issues found with fuzzers. We also handle many issues that the security team ends up classifying as not worth an update, because we add packages to dla-needed.txt before the security team has done its review and nobody checks afterwards.

Thanks to our sponsors

New sponsors are in bold.


Michal Čihař: wlc 0.7

16 December, 2016 - 16:15

wlc 0.7, a command-line utility for Weblate, has just been released. There are several new commands, such as translation file download and statistics fetching.

Full list of changes:

  • Added reset operation.
  • Added statistics for project.
  • Added changes listing.
  • Added file downloads.

wlc is built on the API introduced in Weblate 2.6, which is still under development; you need Weblate 2.10 for some features (already available on our hosting offering). You can find usage examples in the wlc documentation.
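
To give a feel for the new commands, here is a rough sketch of how they might be invoked. The command names follow the changelog above, but the exact argument syntax is my assumption rather than something verified against 0.7, so check wlc --help and the documentation:

    # statistics for a project (project/component names are placeholders)
    wlc stats hello
    # list recent changes
    wlc changes hello
    # download a translation file for one component/language
    wlc download hello/master/cs
    # reset the underlying repository
    wlc reset hello/master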


Steve Kemp: A simple Perl alternative to storing data in Redis

16 December, 2016 - 11:45

I continue to be a big user of Perl, and for many of my sites I avoid the use of MySQL, which means that I largely store data in flat files, SQLite databases, or in memory via Redis.

One of my servers was recently struggling with RAM, and the surprising cause was "too much data" in Redis. (Surprising because I'd not been paying attention to how heavily it was being used, and also because ASCII text compresses pretty well.)

Read/Write speed isn't a real concern, so I figured I'd move the data into an SQLite database, but that would require rewriting the application.

The Redis client library for Perl is pretty awesome, and simple usage looks like this:

# Connect to localhost.
my $r = Redis->new();

# simple storage
$r->set( "key", "value" );

# Work with sets
$r->sadd( "fruits", "orange" );
$r->sadd( "fruits", "apple" );
$r->sadd( "fruits", "blueberry" );
$r->sadd( "fruits", "banannanananananarama" );

# Show the set-count
print "There are " . $r->scard( "fruits" ) . " known fruits";

# Pick a random one
print "Here is a random one " . $r->srandmember( "fruits" ) . "\n";

I figured that, if I ignored the Lua support and the other more complex operations, creating a compatible API implementation wouldn't be too hard. So rather than porting my application to use SQLite directly, I could just use a different client library.

In short, I changed this:

use Redis;
my $r = Redis->new();

To this:

use Redis::SQLite;
my $r = Redis::SQLite->new();

And everything continues to work. I've implemented all the set-related functions except one, and a random smattering of the other simple operations.
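
The trick is that each Redis set operation has a near-direct SQL equivalent. Steve's module is Perl, but purely to illustrate the mapping (the table layout and names below are my own sketch, not Redis::SQLite's internals), the same idea looks like this in Python's sqlite3:

import sqlite3

class RedisLikeSets:
    """Toy illustration: Redis set commands mapped onto a SQLite table."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        # one row per set member; the UNIQUE constraint gives set semantics
        self.db.execute("CREATE TABLE IF NOT EXISTS sets "
                        "(key TEXT, member TEXT, UNIQUE(key, member))")

    def sadd(self, key, member):
        # SADD: insert unless the member is already present
        self.db.execute("INSERT OR IGNORE INTO sets VALUES (?, ?)",
                        (key, member))
        self.db.commit()

    def scard(self, key):
        # SCARD: count the members stored under the key
        return self.db.execute("SELECT COUNT(*) FROM sets WHERE key = ?",
                               (key,)).fetchone()[0]

    def srandmember(self, key):
        # SRANDMEMBER: pick one member at random
        row = self.db.execute("SELECT member FROM sets WHERE key = ? "
                              "ORDER BY RANDOM() LIMIT 1", (key,)).fetchone()
        return row[0] if row else None

SMEMBERS, SREM, and the rest fall out of the same pattern, which is presumably why covering almost the whole set family was quick work.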

The appropriate test-cases from the Redis client library (i.e. after removing all references to things I didn't implement) pass, and my own new tests also give me confidence.

It's obviously not a hard job, but it was a quick solution to a real problem and might be useful to others.

My image hosting site and my markdown sharing site now both use this wrapper and seem to be performing well - but with more free RAM.

No doubt I'll add more of the simple primitives as time goes on, but so far I've done enough to be useful.

Reproducible builds folks: Reproducible Builds: week 85 in Stretch cycle

15 December, 2016 - 21:35

What happened in the Reproducible Builds effort between Sunday December 4 and Saturday December 10 2016:

Toolchain development and fixes

Anders Kaseorg opened a pull request to asciidoc upstream, to make it generate reproducible documentation. (#782294)

Bugs filed

Chris Lamb:

Clint Adams:

Dafydd Harries:

Robbie Harwood:

Valerie R Young:

Reviews of unreproducible packages

47 package reviews have been added, 84 have been updated, and 3 have been removed this week, adding to our knowledge about identified issues.

1 new issue type has been added: lessc_captures_build_path

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (8)

diffoscope development

Chris Lamb fixed a division-by-zero in the progress bar, split out trydiffoscope into a separate package, and made some performance enhancements. Maria Glukhova fixed build issues with Python 3.4.

strip-nondeterminism development

Anders Kaseorg added support for .par files, by allowing them to be treated as Zip archives; and Chris Lamb improved some documentation.

reprotest development

Ximin Luo added the ability to vary the build time using faketime, as well as other code quality improvements and cleanups.

He also discovered a little-known fact about faketime - that it also modifies filesystem timestamps by default. He submitted a PR to libfaketime upstream to improve the documentation on this, which was quickly accepted, and also disabled this feature in reprotest's own usage of faketime.

buildinfo.debian.net development

There was further work on buildinfo.debian.net code. Chris Lamb added support for buildinfo format 0.2 and made rejection notices clearer; and Emanuel Bronshtein fixed some links to use HTTPS.

Misc.

This week's edition was written by Ximin Luo and reviewed by a bunch of Reproducible Builds folks on IRC and via email.

Jamie McClelland: Should we be pushing OpenPGP?

15 December, 2016 - 20:56

Bjarni Rúnar, the author of Mailpile, published a blog post responding to recent posts disparaging OpenPGP. It's a good read.

There's one reason to support OpenPGP missing from the blog: OpenPGP protects you if your mail server is hacked. I'm sure that Debbie Wasserman Schultz wishes she had been using OpenPGP.

Having said all of this... OpenPGP didn't make my recent list of security tips. That omission is for two reasons:

  • I've never trusted my phone enough to store my OpenPGP keys on it. However, now that I am encrypting the data partition on my phone, should I reconsider? I use the K-9 email client, which has had OpenPGP support for years; should I recommend that other people use K-9 and upload their keys to their phones? Suggesting that people use OpenPGP without the ability to use it on their phone seems like an empty suggestion. What about OpenPGP on the iPhone?
  • I'm waiting for Mailpile 1.0 to be released so I have a viable suggestion for how people can start using encryption on their desktops now. The complexity of using Thunderbird with Enigmail (and the uncertain future of Thunderbird) makes it a hard sell. Should I reconsider? What about Mailvelope? Should I be encouraging people to use Mailvelope with their Gmail, etc. accounts?

Michal Čihař: Weblate 2.10

15 December, 2016 - 20:30

Right on schedule, Weblate 2.10 is out today. This release brings the Git exporter module, improves support for machine translation services, and adds various CSV exports and API interfaces.

Full list of changes:

  • Added quality check to check whether plurals are translated differently.
  • Fixed GitHub hooks for repositories with authentication.
  • Added optional Git exporter module.
  • Support for Microsoft Cognitive Services Translator API.
  • Simplified project and component user interface.
  • Added automatic fix to remove control chars.
  • Added per language overview to project.
  • Added support for CSV export.
  • Added CSV download for stats.
  • Added matrix view for quick overview of all translations.
  • Added basic API for changes and units.
  • Added support for Apertium APy server for machine translations.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, and you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.


Carl Chenet: Feed2tweet 0.8, tool to post RSS feeds to Twitter, released

15 December, 2016 - 06:00

Feed2tweet 0.8, a self-hosted Python app to automatically post RSS feeds to the Twitter social network, was released on December 14th.

With this release, Feed2tweet now manages hashtags smartly, adding as many as possible given the size limit of the tweet.

Two new options are also available (see the example below):

  • --populate-cache, to retrieve the last entries of the RSS feeds and store them in the local cache file without posting them on Twitter
  • --rss-sections, to display the available sections in the RSS feed, allowing you to use these section names in your tweet format (see the official documentation for more details)
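
For example, assuming the usual invocation with a configuration file (the -c flag and file path here are illustrative assumptions; see the documentation for the exact syntax):

    # show which sections the configured RSS feed offers
    feed2tweet --rss-sections -c feed2tweet.ini
    # fill the local cache without posting anything to Twitter
    feed2tweet --populate-cache -c feed2tweet.ini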

Feed2tweet 0.8 is already in production for Le Journal du hacker, a French-speaking Hacker News-like website, LinuxJobs.fr, a French-speaking job board, and this very blog.

What’s the purpose of Feed2tweet?

Some online services offer to convert your RSS entries into Twitter posts, but these services are usually unreliable, slow, and don't respect your privacy. Feed2tweet is a self-hosted Python app; the source code is easy to read, and you can enjoy the official documentation online, with lots of examples.

Twitter Out Of The Browser

Have a look at my Github account for my other Twitter automation tools:

  • Retweet, which retweets all tweets (or a filtered subset) from one Twitter account to another to spread content.
  • db2twitter, which gets data from a SQL database (several are supported), builds tweets, and sends them to Twitter.
  • Twitterwatch, which monitors the activity of your Twitter timeline and warns you if no new tweet appears.

What about you? Do you use tools to automate the management of your Twitter account? Feel free to give me feedback in the comments below.

Antoine Beaupré: Debian considering automated upgrades

15 December, 2016 - 00:00

The Debian project is looking at possibly making automatic minor upgrades to installed packages the default for newly installed systems. While Debian has a reliable and stable package update system that has been an inspiration for multiple operating systems (the venerable APT), upgrades are usually a manual process on Debian for most users.

The proposal was brought up during the Debian Cloud sprint in November by longtime Debian Developer Steve McIntyre. The rationale was to make sure that users installing Debian in the cloud have a "secure" experience by default, by installing and configuring the unattended-upgrades package within the images. The unattended-upgrades package contains a Python program that automatically performs any pending upgrade and is designed to run unattended. It is roughly the equivalent of doing apt-get update; apt-get upgrade in a cron job, but has special code to handle error conditions, warn about reboots, and selectively upgrade packages. The package was originally written for Ubuntu by Michael Vogt, a longtime Debian developer and Canonical employee.

Since there was a concern that Debian cloud images would be different from normal Debian installs, McIntyre suggested installing unattended-upgrades by default on all Debian installs, so that people have a consistent experience inside and outside of the cloud. The discussion that followed was interesting as it brought up key issues one would have when deploying automated upgrade tools, outlining both the benefits and downsides to such systems.

Problems with automated upgrades

An issue raised in the following discussion is that automated upgrades may create unscheduled downtime for critical services. For example, certain sites may not be willing to tolerate a master MySQL server rebooting in conditions not controlled by the administrators. The consensus seems to be that experienced administrators will be able to solve this issue on their own, or are already doing so.

For example, Noah Meyerhans, a Debian developer, argued that "any reasonably well managed production host is going to be driven by some kind of configuration management system" where competent administrators can override the defaults. Debian, for example, provides the policy-rc.d mechanism to disable service restarts on certain packages out of the box. unattended-upgrades also features a way to disable upgrades on specific packages that administrators would consider too sensitive to restart automatically and will want to schedule during maintenance windows.
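
For example, enabling the daily run and holding back a sensitive package comes down to a few APT configuration directives (the blacklisted package below is just an illustration):

    // /etc/apt/apt.conf.d/20auto-upgrades: run the daily jobs
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    // /etc/apt/apt.conf.d/50unattended-upgrades: never upgrade these
    // automatically; handle them in a maintenance window instead
    Unattended-Upgrade::Package-Blacklist {
        "mysql-server";
    };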

Reboots were another issue discussed: how and when to deploy kernel upgrades? Automating kernel upgrades may mean data loss if the reboot happens during a critical operation. On Debian systems, the kernel upgrade mechanisms already provide a /var/run/reboot-required flag file that tools can monitor to notify users of the required reboot. For example, some desktop environments will popup a warning prompting users to reboot when the file exists. Debian doesn't currently feature an equivalent warning for command-line operation: Vogt suggested that the warning could be shown along with the usual /etc/motd announcement.
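
Checking the flag is trivial, which is what makes it such a convenient interface for motd scripts and monitoring tools; for example:

    # e.g. in a login script or cron job
    if [ -f /var/run/reboot-required ]; then
        echo "*** System restart required ***"
    fi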

The ideal solution here, of course, is reboot-less kernel upgrades, which is also known as "live patching" the kernel. Unfortunately, this area is still in development in the kernel (as was previously discussed here). Canonical deployed the feature for the Ubuntu 16.04 LTS release, but Debian doesn't yet have such capability, since it requires extra infrastructure among other issues.

Furthermore, system reboots are only one part of the problem. Currently, upgrading packages only replaces the code and restarts the primary service shipped with a given package. On library upgrades, however, dependent services may not necessarily notice and will keep running with older, possibly vulnerable, libraries. While libc6, in Debian, has special code to restart dependent services, other libraries like libssl do not notify dependent services that they need to restart to benefit from potentially critical security fixes.

One solution to this is the needrestart package which inspects all running processes and restarts services as necessary. It also covers interpreted code, specifically Ruby, Python, and Perl. In my experience, however, it can take up to a minute to inspect all processes, which degrades the interactivity of the usually satisfying apt-get install process. Nevertheless, it seems like needrestart is a key component of a properly deployed automated upgrade system.
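
needrestart can also be driven non-interactively, which is what an automated upgrade pipeline would want; the mode flags below are from my reading of needrestart(1), so verify them on your system:

    # list services running outdated libraries, without restarting anything
    needrestart -r l
    # restart affected services automatically (suitable for unattended use)
    needrestart -r a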

Benefits of automated upgrades

One thing that was less discussed is the actual benefit of automating upgrades. It is merely described as "secure by default" by McIntyre in the proposal, but no one actually expanded on this much. For me, however, it is now obvious that any out-of-date system will be systematically attacked by automated probes and may be taken over to the detriment of the whole internet community, as we are seeing with Internet of Things devices. As Debian Developer Lars Wirzenius said:

The ecosystem-wide security benefits of having Debian systems keep up to date with security updates by default outweigh any inconvenience of having to tweak system configuration on hosts where the automatic updates are problematic.

One could compare automated upgrades with backups: if they are not automated, they do not exist and you will run into trouble without them. (Wirzenius, coincidentally, also works on the Obnam backup software.)

Another benefit that may be less obvious is the acceleration of the feedback loop between developers and users: developers like to know quickly when an update creates a regression. Automation does create the risk of a bad update affecting more users, but this issue is already present, to a lesser extent, with manual updates. And the same solution applies: have a staging area for security upgrades, the same way updates to Debian stable are first proposed before shipping a point release. This doesn't have to be limited to stable security updates either: more adventurous users could follow rolling distributions like Debian testing or unstable with unattended upgrades as well, with all the risks and benefits that implies.

Possible non-issues

The lack of a backlash against the proposal surprised me: I expected the privacy-sensitive Debian community to react negatively to another "phone home" system as it did with the Django proposal. This, however, is different from a phone-home system: it merely leaks package lists, and one has to leak that information to get the updated packages anyway. Furthermore, privacy-sensitive administrators can use APT over Tor to fetch packages. In addition, the diversity of the mirror infrastructure makes it difficult for a single entity to profile users.

Automated upgrades do imply a culture change, however: administrators approve changes only a posteriori as opposed to deliberately deciding to upgrade parts they chose. I remember a time when I had to maintain proprietary operating systems and was reluctant to enable automated upgrades: such changes could mean degraded functionality or additional spyware. However, this is the free-software world and upgrades generally come with bug fixes and new features, not additional restrictions.

Automating major upgrades?

While automating minor upgrades is one part of the solution to the problem of security maintenance, the other is how to deal with major upgrades. Once a release becomes unsupported, security issues may come up and affect older software. While Debian LTS extends release lifetimes significantly, it merely delays the inevitable major upgrades. In the grand scheme of things, the lifetimes of Linux systems (Debian: 3-5 years, Ubuntu: 1-5 years) are fairly short compared with other operating systems (Solaris: 10-15 years, Windows: 10+ years), which makes major upgrades especially critical.

While major upgrades are not currently automated in Debian, they are usually pretty simple: edit sources.list, then run:

    # apt-get update && apt-get dist-upgrade
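
Here "edit sources.list" usually just means pointing every suite reference at the new release's codename, for example (a blunt but common approach; back up the file first):

    # e.g. moving from wheezy to jessie
    sed -i 's/wheezy/jessie/g' /etc/apt/sources.list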

But the actual upgrade process is really much more complex. If you run into problems with the above commands, you will quickly learn that you should have followed the release notes, a whopping 20,000-word, ten-section document that outlines all the gory details of the release. This is a real issue for large deployments and for users unfamiliar with the command line.

The solution most administrators seem to use right now is to roll their own automated upgrade process. For example, the Debian.org system administrators have their own process for the "jessie" (8.0) upgrade. I have also written a specification for how major upgrades could be automated that attempts to take into account the wide variety of corner cases that occur during major upgrades, but it is currently at the design stage. This problem space is therefore generally unaddressed in Debian: Ubuntu does have a do-release-upgrade command, but it is Ubuntu-specific and would need significant changes in order to work on Debian.

Future work

Ubuntu currently defaults to "no automation" but, on install, invites users to enable unattended-upgrades or Landscape, a proprietary system-management service from Canonical. According to Vogt, the company supports both projects equally as they differ in scope: unattended-upgrades just upgrades packages while Landscape aims at maintaining thousands of machines and handles user management, release upgrades, statistics, and aggregation.

It appears that Debian will enable unattended-upgrades by default on the images built for the cloud. For regular installs, the consensus that has emerged points to the Debian installer prompting users to ask whether they want to disable the feature as well. One reason why this was not enabled before is that unattended-upgrades had serious bugs in the past that made it less attractive. For example, it would simply fail to follow security updates, a major bug that was fortunately promptly fixed by the maintainer.

In any case, it is important to distribute security and major upgrades on Debian machines in a timely manner. In my long experience in professionally administering Unix server farms, I have found the upgrade work to be a critical but time-consuming part of my work. During that time, I successfully deployed an automated upgrade system all the way back to Debian woody, using the simpler cron-apt. This approach is, unfortunately, a little brittle and non-standard; it doesn't address the need of automating major upgrades, for which I had to revert to tools like cluster-ssh or more specialized configuration management tools like Puppet. I therefore encourage any effort towards improving that process for the whole community.

More information about the configuration of unattended-upgrades can be found in the Ubuntu documentation or the Debian wiki.

Note: this article first appeared in the Linux Weekly News.

Antoine Beaupré: Django debates privacy concern

14 December, 2016 - 23:15

In recent years, privacy issues have become a growing concern among free-software projects and users. As more and more software tasks become web-based, surveillance and tracking of users is also on the rise. While some software may use advertising as a source of revenue, which has the side effect of monitoring users, the Django community recently got into an interesting debate surrounding a proposal to add user tracking—actually developer tracking—to the popular Python web framework.

Tracking for funding

A novel aspect of this debate is that the initiative comes from concerns of the Django Software Foundation (DSF) about funding. The proposal suggests that "relying on the free labor of volunteers is ineffective, unfair, and risky" and states that "the future of Django depends on our ability to fund its development". In fact, the DSF recently hired an engineer to help oversee Django's development, which has been quite successful in helping the project make timely releases with fewer bugs. Various fundraising efforts have resulted in major new Django features, but it is difficult to attract sponsors without some hard data on the usage of Django.

The proposed feature tries to count the number of "unique developers" and gather some metrics of their environments by using Google Analytics (GA) in Django. The actual proposal (DEP 8) is done as a pull request, which is part of the Django Enhancement Proposal (DEP) process, similar in spirit to the Python Enhancement Proposal (PEP) process. DEP 8 was brought forward by a longtime Django developer, Jacob Kaplan-Moss.

The rationale is that "if we had clear data on the extent of Django's usage, it would be much easier to approach organizations for funding". The proposal is essentially about adding code in Django to send a certain set of metrics when "developer" commands are run. The system would be "opt-out", enabled by default unless turned off, although the developer would be warned the first time the phone-home system is used. The proposal notes that an opt-in system "severely undercounts" and is therefore not considered "substantially better than a community survey" that the DSF is already doing.

Information gathered

The pieces of information reported are specifically designed to run only in a developer's environment and not in production. The metrics identified are, at the time of writing:

  • an event category (the developer commands: startproject, startapp, runserver)
  • the HTTP User-Agent string identifying the Django, Python, and OS versions
  • a user-specific unique identifier (a UUID generated on first run)
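
To make the mechanics concrete, a reporter built around those three metrics could look like the sketch below. This is purely illustrative, not code from DEP 8 (the file location, messages, and function names are invented):

import os
import platform
import uuid

STATE = os.path.expanduser("~/.django-analytics-id")

def get_or_create_uuid():
    # a user-specific identifier, generated on first run
    if not os.path.exists(STATE):
        with open(STATE, "w") as f:
            f.write(str(uuid.uuid4()))
        # the first-run warning the proposal describes
        print("Note: anonymous usage stats are enabled; "
              "see the docs to opt out.")
    with open(STATE) as f:
        return f.read().strip()

def build_payload(command, django_version):
    # the three metrics DEP 8 proposes to send
    return {
        "event": command,  # e.g. startproject, startapp, runserver
        "ua": "Django/%s Python/%s %s" % (django_version,
                                          platform.python_version(),
                                          platform.system()),
        "uid": get_or_create_uuid(),
    }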

The proposal mentions the use of the GA aip flag which, according to GA documentation, makes "the IP address of the sender 'anonymized'". It is not quite clear how that is done at Google and, given that it is a proprietary platform, there is no way to verify that claim. The proposal says it means that "we can't see, and Google Analytics doesn't store, your actual IP". But that is not actually what Google does: GA stores IP addresses, the documentation just says they are anonymized, without explaining how.

GA is presented as a trade-off, since "Google's track record indicates that they don't value privacy nearly as high" as the DSF does. The alternative, deploying its own analytics software, was presented as making sustainability problems worse. According to the proposal, Google "can't track Django users. [...] The only thing Google could do would be to lie about anonymizing IP addresses, and attempt to match users based on their IPs".

The truth is that we don't actually know what Google means when it "anonymizes" data: Jannis Leidel, a Django team member, commented that "Google has previously been subjected to secret US court orders and was required to collaborate in mass surveillance conducted by US intelligence services" that limit even Google's capacity of ensuring its users' anonymity. Leidel also argued that the legal framework of the US may not apply elsewhere in the world: "for example the strict German (and by extension EU) privacy laws would exclude the automatic opt-in as a lawful option".

Furthermore, the proposal claims that "if we discovered Google was lying about this, we'd obviously stop using them immediately", but it is unclear exactly how this could be implemented if the software was already deployed. There are also concerns that an implementation could block normal operation, especially in countries (like China) where Google itself may be blocked. Finally, some expressed concerns that the information could constitute a security problem, since it would unduly expose the version number of Django that is running.

In other projects

Django is certainly not the first project to consider implementing analytics to get more information about its users. The proposal is largely inspired by a similar system implemented by the OS X Homebrew package manager, which has its own opt-out analytics.

Other projects embed GA code directly in their web pages. This is apparently the option chosen by the Oscar Django-based ecommerce solution, but the DSF saw that as less useful, since it would count Django site administrators rather than developers. Wagtail, a Django-based content-management system, was incorrectly identified as using GA directly as well. It actually uses referrer information to identify installed domains through the version update checks, with opt-out. Wagtail didn't use GA because the project wanted only minimal data and was worried about users' reactions.

NPM, the JavaScript package manager, also considered similar tracking extensions. Laurie Voss, the co-founder of NPM, said it decided to completely avoid phoning home, because "users would absolutely hate it". But NPM users are constantly downloading packages to rebuild applications from scratch, so it has more complete usage metrics, which are aggregated and available via a public API. NPM users seem to find this is a "reasonable utility/privacy trade". Some NPM packages do phone home and have seen "very mixed" feedback from users, Voss said.

Eric Holscher, co-founder of Read the Docs, said the project is considering using Sentry for centralized reporting, which is a different idea, but interesting considering Sentry is fully open source. So even though it is a commercial service (as opposed to the closed-source Google Analytics), it may be possible to verify any anonymity claims.

Debian's response

Since Django is shipped with Debian, one concern was the reaction of the distribution to the change. Indeed, "major distros' positions would be very important for public reception" to the feature, another developer stated.

One of the current maintainers of Django in Debian, Raphaël Hertzog, explicitly stated from the start that such a system would "likely be disabled by default in Debian". There were two short discussions on Debian mailing lists where the overall consensus seemed to be that any opt-out tracking code was undesirable in Debian, especially if it was aimed at Google servers.

I have done some research to see what, exactly, was acceptable as a phone-home system in the Debian community. My research has revealed ten distinct bug reports against packages that would unexpectedly connect to the network, most of which were not directly about collecting statistics but more often about checking for new versions. In most cases I found, the feature was disabled. In the case of version checks, it seems right for Debian to disable the feature, because the package cannot upgrade itself: that task is delegated to the package manager. One of those issues was the infamous "OK Google" voice activation binary blob controversy that was previously reported here and has since been fixed (although other issues remain in Chromium).

I have also found out that there is no clearly defined policy in Debian regarding tracking software. What I have found, however, is that there seems to be a strong consensus in Debian that any tracking is unacceptable. This is, for example, an extract of a policy that was drafted (but never formally adopted) by Ian Jackson, a longtime Debian developer:

Software in Debian should not communicate over the network except: in order to, and as necessary to, perform their function[...]; or for other purposes with explicit permission from the user.

In other words, opt-in only, period. Jackson explained that "when we originally wrote the core of the policy documents, the DFSG [Debian Free Software Guidelines], the SC [Social Contract], and so on, no-one would have considered this behaviour acceptable", which explains why no explicit formal policy has been adopted yet in the Debian project.

One of the concerns with opt-out systems (or even prompts that default to opt-in) was well explained back then by Debian developer Bas Wijnen:

It very much resembles having to click through a license for every package you install. One of the nice things about Debian is that the user doesn't need to worry about such things: Debian makes sure things are fine.

One could argue that Debian has its own tracking systems. For example, by default, Debian will "phone home" through the APT update system (though it only reports the packages requested). However, this is currently not automated by default, although there are plans to do so soon. Furthermore, Debian members do not consider APT as tracking, because it needs to connect to the network to accomplish its primary function. Since there are multiple distributed mirrors (which the user gets to choose when installing), the risk of surveillance and tracking is also greatly reduced.

A better parallel could be drawn with Debian's popcon system, which actually tracks Debian installations, including package lists. But as Barry Warsaw pointed out in that discussion, "popcon is 'opt-in' and [...] the overwhelming majority in Debian is in favour of it in contrast to 'opt-out'". It should be noted that popcon, while opt-in, defaults to "yes" if users click through the install process. [Update: As pointed out in the comments, popcon actually defaults to "no" in Debian.] There are around 200,000 submissions at this time, which are tracked with machine-specific unique identifiers that are submitted daily. Ubuntu, which also uses the popcon software, gets around 2.8 million daily submissions, while Canonical estimates there are 40 million desktop users of Ubuntu. This would mean there is about an order of magnitude more installations than what is reported by popcon.

Policy aside, Warsaw explained that "Debian has a reputation for taking privacy issues very serious and likes to keep it".

Next steps

There are obviously disagreements within the Django project about how to handle this problem. It looks like the phone-home system may end up being implemented as a proxy system "which would allow us to strip IP addresses instead of relying on Google to anonymize them, or to anonymize them ourselves", another Django developer, Aymeric Augustin, said. Augustin also stated that the feature wouldn't "land before Django drops support for Python 2", which is currently estimated to be around 2020. It is unclear, then, how the proposal would resolve the funding issues, considering how long it would take to deploy the change and then collect the information so that it can be used to spur the funding efforts.

It also seems the system may explicitly prompt the user, with an opt-out default, instead of just splashing a warning or privacy agreement without a prompt. As Shai Berger, another Django contributor, stated, "you do not get [those] kind of numbers in community surveys". Berger also made the argument that "we trust the community to give back without being forced to do so"; furthermore:

I don't believe the increase we might get in the number of reports by making it harder to opt-out, can be worth the ill-will generated for people who might feel the reporting was "sneaked" upon them, or even those who feel they were nagged into participation rather than choosing to participate.

Other options may also include gathering metrics in pip or PyPI, which was proposed by Donald Stufft. Leidel also proposed that the system could ask to opt-in only after a few times the commands are called.

It is encouraging to see that a community can discuss such issues without heating up too much and shows great maturity for the Django project. Every free-software project may be confronted with funding and sustainability issues. Django seems to be trying to address this in a transparent way. The project is willing to engage with the whole spectrum of the community, from the top leaders to downstream distributors, including individual developers. This practice should serve as a model, if not of how to do funding or tracking, at least of how to discuss those issues productively.

Everyone seems to agree the point is not to surveil users, but improve the software. As Lars Wirzenius, a Debian developer, commented: "it's a very sad situation if free software projects have to compromise on privacy to get funded". Hopefully, Django will be able to improve its funding without compromising its principles.

Note: this article first appeared in the Linux Weekly News.

Dirk Eddelbuettel: anytime 0.1.2: Another bugfix

14 December, 2016 - 09:00

Another update, now at release 0.1.2, of anytime arrived at CRAN earlier today.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, ... format to either POSIXct or Date objects -- and to do so without requiring a format string.

See the anytime page, or the GitHub README.md for a few examples, or just consider the following illustration:

R> library(anytime)
R> anytime("20161107 202122")   ## all digits
[1] "2016-11-07 20:21:22 CST"
R> utctime("2016Nov07 202122")  ## UTC parse example
[1] "2016-11-07 14:21:22 CST"
R> 

Release 0.1.2 addresses a somewhat bizarre Windows-only bug reported at GitHub in #33 and at StackOverflow. Formats of the %Y-%b-%d form, i.e. 2016-Dec-12 for today, would fail on Windows, as the contiguous string was apparently getting split by a routine looking for splits on spaces. Really strange.

Anyway, I switched to using more helper functions from the Boost String Algorithms library, and things are behaving now. An extra shoutout once more to Gábor Csárdi and the R Consortium for the most awesome R-Builder: I was able to test and fix on Windows during the weekend with no access to an actual Windows environment.

The NEWS file summarises the release:

Changes in anytime version 0.1.2 (2016-12-13)
  • The (internal) string processing and splitting now uses Boost algorithm functions, which avoids a (bizarre) bug on Windows.
  • Test coverage was increased.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Shirish Agarwal: Eagle Encounters, pier Stellenbosch

14 December, 2016 - 06:15

Before starting, I have to say that hindsight, as they say, is always 20/20. I was moaning about my 6/7-hour trip a few blog posts back, but have now come to know of the 17.5-hour flights (17.5 × 800 km/h = 14,000 km) happening around me.

Seeing those flights, I would say I was whining about nothing. I can't even imagine how people feel on them. Six hours were too much in the tin can, even though I was thankfully in an aisle seat; in 14 hours, most people would probably give in to air rage.

I just saw an excellent article on the subject. I also came to know that seat selection and food on long-haul flights are a luxury, which changes the equation quite a bit as well. On those counts, it seems Qatar Airways treated me quite well, as I was able to use both options.

Disclaimer – My knowledge about birds/avians is almost non-existent, hence feel free to correct me if I go wrong anywhere.

Coming back to earth, literally, I will have to share a bit about South Africa, as that is part and parcel of what I'm going to share next. Also, many of the pictures in this particular blog post belong to Keerthana Krishnan, who has shared them with me with permission to share them with the rest of the world.

When I was in South Africa, I knew from the first couple of days, as well as from what little South African history I had read before travelling, that Europeans, specifically the Dutch, had ruled South Africa for many years.

What was shared with me in the first day or two was that 'Afrikaans' is mostly spoken by Europeans still living in South Africa, and by some of the coloured people as well. This tied in with the literature I had already read.

The Wikipedia page shares which language is spoken by whom and how the demographics play out, if people are interested.

One of the words, or parts of words, we came to know for places is 'bosch', which is used in many a place. 'Bosch' means wood or forest, so any place called 'somethingbosch' signified to us that the area is, or once was, a forest.

On the second/third day, Chirayu (pictured, extreme left) shared the idea of going to Eagle Encounters. Also pictured are yours truly, some of the people from GSoC, Keerthana in the middle, and, on the extreme right, the driver who took us to Eagle Encounters.

It was supposed to be somewhat near (Spier, Stellenbosch). While I was not able to figure out where 'Eagle Encounters' is on OpenStreetMap, somebody named Firefishy added Spier to OSM a few years back. So thank you for that, Firefishy; now I can at least pin-point a nearby place.

I didn't try to find out more about the place, as Chirayu said it's a 'zoo'. I wasn't enthusiastic, as I had been depressed by most zoos in India, though you do have national reserves/parks in India where you can see animals in their full glory.

I have been lucky to have been able to see the Tadoba and Ranthambore national parks and spend some quality time (about a week) getting some idea of what happens in forests and among the people living in the buffer zones, but those stories are for a different day altogether.

I have to say I do hope to be part of the Ranthambore experience again somewhere in the future; it really is a beautiful place for flora and fauna, and fortunately or unfortunately this is the best time apart from spring, as you have the play of mist/fog and the animals. North India this time of the year is something to be experienced.

I wasn't much enthused, as zoos in India are claustrophobic for animals and people both. There are small cages, and you see and smell the shit/piss of the animals; generally not a good feeling.

Chirayu shared with me the possibility of being able to ride Segways and a range of bicycles, which relieved me: in case we didn't enjoy the 'zoo', we would at least enjoy the Segways and have a good time.

My whole education about what zoos could be was turned around at Eagle Encounters, as it seems to sit somewhere between a zoo and what I know as national parks, where animals roam free.

We purchased the tickets and went in; the first event/happening was 'Eagle Encounters' itself.

Our introduction to the place was led by two wonderful volunteer-trainers who were in charge of all the birds in the Eagle Encounters vicinity. It started with every one of us who came for the 'Eagle Encounter' show wearing a glove and holding one of a pair of bald owls perched on it. That picture is of a family who was part of our show.

Before my turn came, I was a little apprehensive/worried about holding a bald owl. To my surprise, they were so soft and easy-going that I could hardly feel the weight.

The trainer-volunteers were constantly feeding them earthworm bits (I didn't ask, just guessing), and we were all happy, as they, along with the visitors, constantly played and interacted with the birds, sharing with us the life-cycle of the bald owl. It's only then that I understood why the owl plays such an important part in the Harry Potter universe. They seem to be nice, curious, easy-going, proud creatures, which fits perfectly in the HP universe.

In hindsight, I should have taken video of the whole experience, as the trainer-volunteers showed a battery of owls, eagles, vultures, hawks (different birds of prey), what have you. I have to confess my knowledge of birds is and was non-existent.

One of the larger birds we saw at the Eagle Encounters show. Some of the birds could be dangerous, especially in the wild.

That was the other volunteer-trainer, who was showing off the birds. I especially liked the t-shirt she was wearing. The shop at Eagle Encounters had a whole lot of them, but they were a bit expensive and just not my size.

Tidbit – Just a few years ago, it was a shock for me to realize that what is commonly known in the country as a parrot is actually a parakeet. As can be seen in the linked article, they are widely distributed in India.

While I was young, I used to see rose-ringed parakeets quite a bit, but nowadays, probably due to pollution and other factors, they are noticeably fewer. They are popular as pets in India. I don't know what Pollito would think about that; I don't think he would think well of it.

As I cannot differentiate between a hawk, a vulture, an eagle, etc., I will safely say 'a bird of prey', as that was what he was holding. This photo was taken after the event was over, when we all were curious to know about the volunteer-trainers, their day jobs, and what it meant for them to be taking care of these birds.

While I don't remember the name of the trainer-volunteer, among other things it was shared that the volunteer-trainers aren't paid enough and never have enough funds to take care of all the birds who come to them.

Where the picture was shot (both this one and the earlier one) was a sort of open office. If you look closely, you will see the names of the birds; for instance, people who love LOTR would easily spot 'Gandalf'. The board lists how much food (probably in grams) each bird ate in a day and a week.

While it was not shared, I'm sure there would be a lot of paperwork and studies involved in making the birds as well as possible. From a computer-science perspective, there seemed to be a lot of potential for avian and big-data professionals to do computer modelling and analysis and give more insight into the rehabilitation efforts, so the process could become more fine-tuned, efficient, and perhaps more economical.

This is how we saw the majority of the birds. Most of them had a metal/plastic tether tied to small artificial branches like the one above.

I forgot to share a very important point: Eagle Encounters is not a zoo but a rehabilitation centre.

While the cynic/skeptic part of me tried not to dwell on the before-and-after pictures of the birds brought to the rehabilitation centre, the caring part was moved to see most of the birds being treated with love and affection. From our conversations with the volunteer-trainers, it emerged that every week they have to turn away lots of birds due to space constraints. They can keep only the most serious/life-threatening cases, for which they can provide care in a sustainable way.

Some of the cages the birds were in were large and airy. I wouldn't say clean, as what little I read before and afterwards suggests birds shit enormously, so cleaning cages is quite an effort. On most of the cages and near those artificial branches there were placards naming the people sponsoring a bird or two.

From what was shared, many of the birds who came in had been abused in many ways. Some had had their bones crushed, among other cruelties. As I had been wonderfully surprised by seeing birds come so close to me and most of my friends, I felt rage towards those who had treated the birds in such evil ways.

What was shared with us is that while they try to heal the birds as much as possible, it is always doubtful how well the birds would survive on their own in nature; hence many of these birds go to a sponsor or some other place once they are well.

If you look at the picture closely (maybe at the higher-resolution photo in the gallery), you will see that both birds have been adopted by two different couples. The birds, as the name tag shows, are called 'Secretaries'.

The Secretaries make a distinctive sound, similar to that of old typewriters. Just as woodpeckers make Morse-code-like noises when pecking on trees with their beaks, the Secretaries made something like the clack of keys on an old Remington typewriter.

This is one of the birds in one of the few cages. If you look at a higher-resolution version of the earlier picture, the one with the 'Secretaries', you can see that they are trying to expand the rehabilitation centre.

All in all, an excursion which was supposed to take just an hour extended to something like three hours. Keerthana shot more than a thousand pictures while trying to talk to some of the birds in Telugu.

Those photos would have filled something like 30 traditional photo albums. Jaminy (Keerthana's partner-in-crime) used her selfie stick to the desired effect, taking pictures with most of the birds as one does with celebrities.

I had also taken some, but most of mine were over-exposed, as I was new to mobile photography at the time; I still am, but mostly it works now.

That is the lake we discovered/saw after coming back from Eagle Encounters. We had good times.

Lastly, a virtual prize distribution ceremony –

a. Chirayu – A platinum trophy for thinking of and pitching the place in the first place.

b. Shirish and Deven Bansod – Metal cups for not taking more than 10 minutes to freshen up and be back after hearing the plan to go to Eagle Encounters.

c. All the girls/women – Spoons for actually making it to the day. They took quite some time to freshen up, otherwise it might have been possible to also experience the Segways; who knows.

All in all, an enjoyable day spent being part of 'Eagle Encounters'.



Martin Pitt: The alphabet and pitti end here: Last day at Canonical

13 December, 2016 - 18:38

I’ve had the pleasure of working on Ubuntu for 12½ years now, and during that time used up an entire Latin alphabet of release names! (Well, A and C are still free, but we used H and W twice, so on average.. ☺ ) This has for sure been the most exciting time in my life with tons of good memories! Very few highlights:

  • Getting some spam mail from a South African multi-millionaire about a GREAT OPPORTUNITY
  • Joining #warthogs (my first IRC experience) and collecting my first bounties for “derooting” Debian (i. e. drop privileges from root daemons and suid binaries)
  • Getting invited to Oxford to meet a bunch of people for whose existence I had absolutely zero proof, and throwing myself into debt to buy a laptop for the occasion
  • Once being there, looking into my fellows’ stern and serious faces and being amazed by their professionalism (photo)
  • The excitement and hype around going public with Warty Warthogs Beta
  • Meeting lots of good folks at many UDSes, with great ideas and lots of enthusiasm, and sometimes “Bags of Death”. Group photo from Ubuntu Down Under (photo)
  • Organizing UDSes without Launchpad or other electronic help (photo)
  • Playing “Wish you were Here” with Bill, Tony, Jono, and the other All Stars
  • Seeing bug #1 getting closed, and watching the transformation of Microsoft from being TEH EVIL of the FOSS world to our business partner
  • Getting to know lots of great places around the world. My favourite: luring a few colleagues for a “short walk through San Francisco” but ruining their feet with a 9 hour hike throughout the city, Golden Gate Park and dipping toes into the Pacific.
  • Seeing Ubuntu grow from that crazy idea into one of the main pillars of the free software world
  • ITZ GTK BUG!
  • Getting really excited when Milbank and the Canonical office appeared in the Harry Potter movie
  • Moving between and getting to know many different teams from the inside (security, desktop, OEM, QA, CI, Foundations, Release, SRU, Tech Board, …) to appreciate and understand the value of different perspectives
  • Breaking burning wood boards, making great and silly videos, and team games in the forest (that was La Mola) at various All Hands

But all good things must come to an end — after tossing and turning this idea for a long time, I will leave Canonical at the end of the year. One major reason for me leaving is that after that long a time I am simply in need of a “reboot”: I’ve piled up so many little and large things that I can hardly spend one day on developing something new without hopelessly falling behind in responding to pings about fixing low-level stuff, debugging weird things, handholding infrastructure, explaining how things (should) work, doing urgent archive/SRU/maintenance tasks, and whatnot (“it’s related to boot, it probably has systemd in the name, let’s hand it to pitti”). I’ve repeatedly tried to rid myself of some of those, or at least find someone else to share the load with, but it’s too sticky :-/ So I spent the last few weeks finishing some loose ends and handing over some of my main responsibilities.

Today is my last day at work, which I spend mostly on unsubscribing from package bugs, leaving Launchpad teams, and catching up with emails and bugs, i. e. “clean up my office desk”. From tomorrow on I’ll enjoy some longer EOY holidays, before starting my new job in January.

I’ve been offered the chance to work on Cockpit, on the product itself and its ties into the Linux plumbing stack (storaged/udisks, systemd, and the like). So from next year on I’ll change my Hat to become Red instead of orange. I’m curious to see for myself what the other side of the fence looks like!

This won’t be a personal good-bye. I will continue to see a lot of you Ubuntu folks on FOSDEMs, debconfs, Plumber’s, or on IRC. But certainly much less often, and that’s the part that I regret most — many of you have become close friends, and Canonical feels much more like a family than a company. So, thanks to all lof you for being on that journey with me, and of course a special and big Thank You to Mark Shuttleworth for coming up with this great Ubuntu vision and making all of this possible!

Ben Hutchings: Debian LTS work, November 2016

13 December, 2016 - 12:10

I was assigned 11 hours of work by Freexian's Debian LTS initiative. I worked 9 hours and carry over 2 hours.

In my role as Linux 3.2 stable maintainer, I made a 3.2.84 release with a large number of backported fixes. I then rebased wheezy's linux package on this and made some additional changes to maintain the kernel module ABI. This will probably be released some time in December or January, depending on what security issues turn up.

Jonathan McDowell: No longer a student. Again.

13 December, 2016 - 05:27

(image courtesy of XKCD)

Last week I graduated with a Masters in Legal Science (now taught as an MLaw) from Queen’s University Belfast. I’m pleased to have achieved a Distinction, as well as an award for Outstanding Achievement in the Dissertation (which was on the infringement of privacy by private organisations due to state-mandated surveillance and retention laws - pretty topical given the unfortunate introduction of the Investigatory Powers Act 2016). However, as previously stated, I had decided that I was happier building things and wanted to return to the world of technology. I talked to a bunch of interesting options, got to various stages in the hiring process with each of them, and happily accepted a role with Titan IC Systems, which started at the beginning of September.

Titan have produced a hardware-accelerated regular expression processor (hence the XKCD reference); the RXP in its FPGA variant (what I get to play with) can handle pattern matching against 40Gb/s of traffic. Which is kinda interesting, as it lends itself to a whole range of applications, from network scanning to data mining to, well, anything where you want to sift through a large amount of data checking it against a large number of rules. However, it’s brand new technology for me to get up to speed with (plus getting back into a regular working pattern rather than academentia), and the combination of that and spending most of the summer post-DebConf wrapping up the dissertation has meant I haven’t had as much time to devote to other things as I’d have liked. However, I’ve a few side projects at various stages of completion and will try to manage more regular updates.

Kees Cook: security things in Linux v4.9

13 December, 2016 - 02:05

Previously: v4.8.

Here are a bunch of security things I’m excited about in the newly released Linux v4.9:

Latent Entropy GCC plugin

Building on her earlier work to bring GCC plugin support to the Linux kernel, Emese Revfy ported PaX’s Latent Entropy GCC plugin to upstream. This plugin is significantly more complex than the others that have already been ported, and performs extensive instrumentation of functions marked with __latent_entropy. These functions have their branches and loops adjusted to mix random values (selected at build time) into a global entropy gathering variable. Since the branch and loop ordering is very specific to boot conditions, CPU quirks, memory layout, etc, this provides some additional uncertainty to the kernel’s entropy pool. Since the entropy actually gathered is hard to measure, no entropy is “credited”, but rather used to mix the existing pool further. Probably the best place to enable this plugin is on small devices without other strong sources of entropy.

vmapped kernel stack and thread_info relocation on x86

Normally, kernel stacks are mapped together in memory. This meant that attackers could use forms of stack exhaustion (or stack buffer overflows) to reach past the end of a stack and start writing over another process’s stack. This is bad, and one way to stop it is to provide guard pages between stacks, which is provided by vmalloced memory. Andy Lutomirski did a bunch of work to move to a vmapped kernel stack via CONFIG_VMAP_STACK on x86_64. Now when writing past the end of the stack, the kernel will immediately fault instead of just continuing to blindly write.

Related to this, the kernel was storing thread_info (which contains sensitive values like addr_limit) at the bottom of the kernel stack, which made it an easy target for attackers to hit. Through a combination of explicitly moving targets out of thread_info, removing needless fields, and entirely moving thread_info off the stack, Andy Lutomirski and Linus Torvalds created CONFIG_THREAD_INFO_IN_TASK for x86.
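
The resulting layout is straightforward to picture: thread_info simply becomes the first member of task_struct rather than living at the base of the kernel stack. A simplified sketch, not the exact upstream definition:

/* With CONFIG_THREAD_INFO_IN_TASK, thread_info moves off the kernel
 * stack and into task_struct, out of easy reach of stack overflows. */
struct task_struct {
#ifdef CONFIG_THREAD_INFO_IN_TASK
        /*
         * Kept as the first element so that code holding only a
         * thread_info pointer can still recover the task.
         */
        struct thread_info thread_info;
#endif
        /* ... the rest of the task state ... */
};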

CONFIG_DEBUG_RODATA mandatory on arm64

As recently done for x86, Mark Rutland made CONFIG_DEBUG_RODATA mandatory on arm64. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so there’s no reason to make the protection optional.

randomize_page() cleanup

Cleaning up the code around the userspace ASLR implementations makes them easier to reason about. This has been happening through things like the recent consolidation on arch_mmap_rnd() for ET_DYN and the addition of the entropy sysctl. Both uncovered some awkward uses of get_random_int() (or similar) in and around arch_mmap_rnd() (which is used for mmap, and therefore shared library, and PIE ASLR), as well as in randomize_stack_top() (which is used for stack ASLR). Jason Cooper cleaned things up further by doing away with randomize_range() entirely and replacing it with the saner randomize_page(), making the per-architecture arch_randomize_brk() (responsible for brk ASLR) much easier to understand.
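
For reference, the semantics of the new helper look roughly like the sketch below, simplified from memory of the upstream code in drivers/char/random.c (the overflow clamping is omitted, so treat the details as illustrative):

/* Return a page-aligned address in [start, start + range),
 * or start itself when the range is too small to randomize. */
unsigned long randomize_page(unsigned long start, unsigned long range)
{
        if (!PAGE_ALIGNED(start)) {
                range -= PAGE_ALIGN(start) - start;
                start = PAGE_ALIGN(start);
        }

        range >>= PAGE_SHIFT;           /* count whole pages */
        if (range == 0)
                return start;           /* nothing to randomize */

        return start + (get_random_long() % range << PAGE_SHIFT);
}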

That’s it for now! Let me know if there are other fun things to call attention to in v4.9.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Michal Čihař: Gammu 1.38.0

13 December, 2016 - 00:00

Today Gammu 1.38.0 has been released. The changes from the last two testing releases have been stabilized, and this is the outcome. You can expect changes in the API and SMSD tables, as well as some additional features.

This is also the first stable release in several years that comes with Windows binaries. These are built using AppVeyor and will help bring Windows users back to the latest versions.

Full list of changes and new features can be found on Gammu 1.38.0 release page.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: Debian English Gammu | 0 comments

Thomas Lange: Two months full of FAI work

12 December, 2016 - 22:54

The last two months were filled with a lot of FAI work. After the release that supports the creation of disk images, I attended the Debian cloud sprint in Seattle. During the sprint, we looked at some image creation tools and I gave a short demo of the new fai-diskimage command. We've decided to evaluate FAI for creating official Debian cloud images!

We've created the FAI configuration for GCE images during the sprint, and Sam has now moved his own setup from vmdebootstrap to FAI (see his report).

A few weeks ago, I gave a FAI training in Cologne for a customer. The participants really loved FAI, and they will use it for CentOS, OpenSUSE, Debian and Ubuntu because FAI perfectly fits their needs. Giving this training was a pleasure for me because the participants were really good and they learned everything about FAI.

Later, I wrote some new code for detecting unknown package names in the install_packages command, which is needed because aptitude no longer handles this.

I've finally released the new version FAI 5.3.2 and also created new ISO images which are now available at http://fai-project.org/fai-cd/.

FAI cloud

Dirk Eddelbuettel: RcppCCTZ 0.1.0

12 December, 2016 - 19:27

A new version 0.1.0 of RcppCCTZ arrived on CRAN this morning. It brings a number of new or updated things, starting with new upstream code from CCTZ as well as a few new utility functions.

CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. It requires only a proper C++11 compiler and the standard IANA time zone database, which standard Unix, Linux, OS X, ... computers tend to have in /usr/share/zoneinfo. RcppCCTZ connects this library to R by relying on Rcpp.

A nice example is the helloMoon() function (based on an introductory example in the CCTZ documentation) showing the time when Neil Armstrong took a small step, relative to local time in New York and Sydney:

R> library(RcppCCTZ)
R> helloMoon(verbose=TRUE)
1969-07-20 22:56:00 -0400
1969-07-21 12:56:00 +1000
                   New_York                      Sydney 
"1969-07-20 22:56:00 -0400" "1969-07-21 12:56:00 +1000" 
R> 

The new formatting and parsing functions are illustrated below with default arguments for format strings and timezones. All of this can be customized as usual.

R> example(formatDatetime)

frmtDtR> now <- Sys.time()

frmtDtR> formatDatetime(now)            # current (UTC) time, in full precision RFC3339
[1] "2016-12-12T13:21:03.866711+00:00"

frmtDtR> formatDatetime(now, tgttzstr="America/New_York")  # same but in NY
[1] "2016-12-12T08:21:03.866711-05:00"

frmtDtR> formatDatetime(now + 0:4)     # vectorised
[1] "2016-12-12T13:21:03.866711+00:00" "2016-12-12T13:21:04.866711+00:00" "2016-12-12T13:21:05.866711+00:00"
[4] "2016-12-12T13:21:06.866711+00:00" "2016-12-12T13:21:07.866711+00:00"
R> example(parseDatetime)

prsDttR> ds <- getOption("digits.secs")

prsDttR> options(digits.secs=6) # max value

prsDttR> parseDatetime("2016-12-07 10:11:12",        "%Y-%m-%d %H:%M:%S");   # full seconds
[1] "2016-12-07 04:11:12 CST"

prsDttR> parseDatetime("2016-12-07 10:11:12.123456", "%Y-%m-%d %H:%M:%E*S"); # fractional seconds
[1] "2016-12-07 04:11:12.123456 CST"

prsDttR> parseDatetime("2016-12-07T10:11:12.123456-00:00")  ## default RFC3339 format
[1] "2016-12-07 04:11:12.123456 CST"

prsDttR> now <- trunc(Sys.time())

prsDttR> parseDatetime(formatDatetime(now + 0:4))               # vectorised
[1] "2016-12-12 07:21:17 CST" "2016-12-12 07:21:18 CST" "2016-12-12 07:21:19 CST"
[4] "2016-12-12 07:21:20 CST" "2016-12-12 07:21:21 CST"

prsDttR> options(digits.secs=ds)
R>

Changes in this version are summarized here:

Changes in version 0.1.0 (2016-12-11)
  • Synchronized with CCTZ upstream.

  • New parsing and formatting helpers for Datetime vectors.

  • New parsing and formatting helpers for (two) double vectors representing full std::chrono nanosecond resolutions.

  • Updated documentation and examples.

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Michal Čihař: New location for Weblate

12 December, 2016 - 19:00

Today, Weblate got a new home. The difference is not that big: it has been moved from my personal GitHub account to the WeblateOrg organization.

The main motivation is to have all Weblate-related repositories in one location (all the others, including wlc, Docker and the website, are already there). The move will also make the project easier to manage in the future, as an organization offers more management options on GitHub than a personal account does.

If you have cloned the git repository, please update your remote:

git remote set-url origin https://github.com/WeblateOrg/weblate.git

Of course, all issue tracker locations have changed as well (I believe the redirect on GitHub will stay as long as I don't fork the repository, so expect it to work for at least a month). See the GitHub documentation on repository moving.

I'm sorry for all the trouble, but I think this is a really necessary move.

Filed under: Debian English SUSE Weblate | 0 comments

Paul Tagliamonte: DNSync MAC Addresses

12 December, 2016 - 10:30

I’ve been hacking on and off on a project for my LAN called DNSync. It takes a DNSMasq leases file and syncs it to Amazon Route 53.

I’ve added a new feature which will create A records for each MAC address on the LAN.

Since DNSync won’t touch CNAME records, I use CNAME records (manually) to point to the auto-synced A records for services on my LAN (such as my projector, etc.).

Since it’s easy for two machines to have the same name, I’ve decided to add A records for each MAC as well as for their client names. They take the form of something like ab-cd-ef-ab-cd-ef.by-mac.paultag.house., which is harder to accidentally collide with.
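
For illustration, here is a minimal sketch of turning a MAC address into that per-MAC record name. This is written in C, not DNSync's actual Go code, and the helper name is made up; only the zone suffix comes from the post:

#include <stdio.h>

/* Hypothetical helper: "ab:cd:ef:ab:cd:ef" ->
 * "ab-cd-ef-ab-cd-ef.by-mac.paultag.house." */
static void mac_record_name(const char *mac, char *out, size_t outlen)
{
        size_t i, j = 0;

        /* copy the MAC, replacing each ':' with '-' */
        for (i = 0; mac[i] != '\0' && j + 1 < outlen; i++)
                out[j++] = (mac[i] == ':') ? '-' : mac[i];
        out[j] = '\0';

        /* append the zone suffix from the post */
        snprintf(out + j, outlen - j, ".by-mac.paultag.house.");
}

int main(void)
{
        char name[128];

        mac_record_name("ab:cd:ef:ab:cd:ef", name, sizeof(name));
        printf("%s\n", name); /* ab-cd-ef-ab-cd-ef.by-mac.paultag.house. */
        return 0;
}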


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.