Planet Debian

Planet Debian - http://planet.debian.org/

Andreas Metzler: Testing gnutls28 3.3.8-6+deb8u5 (stable)

6 February, 2017 - 00:34

I am in the process of trying to fix CVE-2017-533[4567] for jessie and have created a preliminary candidate for uploading to stable. Source (and amd64 binaries for the trusting) is available here and also on the gnutls28_jessie branch in Git.

I would appreciate testing and feedback to gnutls28 at packages.debian.org. Thanks!

Vincent Bernat: A Makefile for your Go project

5 February, 2017 - 20:28

My most loathed feature of Go is the mandatory use of GOPATH: I do not want to put my own code next to its dependencies. Fortunately, this issue is slowly starting to be acknowledged by the main authors. In the meantime, you can work around this problem with more opinionated tools (like gb) or by crafting your own Makefile.

For the latter, you can have a look at Filippo Valsorda’s example or my own take, which I describe in more detail here. This is not meant to be a universal Makefile but a relatively short one with some batteries included. It comes with a simple “Hello World!” application.

Project structure

For a standalone project, vendoring is a must-have1 as you cannot rely on your dependencies not to introduce backward-incompatible changes. Some packages use versioned URLs but most of them don’t. There is currently no standard tool to handle vendoring. My personal take is to vendor all dependencies with Glide.

It is good practice to split an application into different packages while the main one stays fairly small. In the hellogopher example, the CLI is handled in the cmd package while the application logic for printing greetings is in the hello package:

.
├── cmd/
│   ├── hello.go
│   ├── root.go
│   └── version.go
├── glide.lock (generated)
├── glide.yaml
├── vendor/ (dependencies will go there)
├── hello/
│   ├── root.go
│   └── root_test.go
├── main.go
├── Makefile
└── README.md
Down the rabbit hole

Let’s take a look at the various “features” of the Makefile.

GOPATH handling

Since all dependencies are vendored, only our own project needs to be in the GOPATH:

PACKAGE  = hellogopher
GOPATH   = $(CURDIR)/.gopath
BASE     = $(GOPATH)/src/$(PACKAGE)

$(BASE):
    @mkdir -p $(dir $@)
    @ln -sf $(CURDIR) $@

The base import path is hellogopher, not github.com/vincentbernat/hellogopher: this shortens imports and makes them easily distinguishable from imports of dependency packages. However, your application won’t be go get-able. This is a personal choice and can be adjusted with the $(PACKAGE) variable.
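For example, to get a go get-able layout instead, you could simply point the variable at the full repository path (a one-line sketch; substitute your own repository):

PACKAGE = github.com/vincentbernat/hellogopher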

We just create a symlink from .gopath/src/hellogopher to our root directory. The GOPATH environment variable is automatically exported to the shell commands of the recipes. Any tool should work fine after changing the current directory to $(BASE). For example, this snippet builds the executable:

.PHONY: all
all: | $(BASE)
    cd $(BASE) && $(GO) build -o bin/$(PACKAGE) main.go
Vendoring dependencies

Glide is a bit like Ruby’s Bundler. In glide.yaml, you specify what packages you need and the constraints you want on them. Glide computes a glide.lock file containing the exact version of each dependency (including recursive dependencies) and downloads them into the vendor/ folder. I chose to check both the glide.yaml and glide.lock files into the VCS. It’s also possible to only check in the first one, or to additionally check in the vendor/ directory. Work is currently in progress to provide a standard dependency management tool with a similar workflow.

We define two rules2:

GLIDE = glide

glide.lock: glide.yaml | $(BASE)
    cd $(BASE) && $(GLIDE) update
    @touch $@
vendor: glide.lock | $(BASE)
    cd $(BASE) && $(GLIDE) --quiet install
    @ln -sf . vendor/src
    @touch $@

We use a variable to invoke glide. This enables a user to easily override it (for example, with make GLIDE=$GOPATH/bin/glide).

Using third-party tools

Most projects need some third-party tools. We can either expect them to be already installed or compile them in our private GOPATH. For example, here is the lint rule:

BIN    = $(GOPATH)/bin
GOLINT = $(BIN)/golint

$(BIN)/golint: | $(BASE) # ❶
    go get github.com/golang/lint/golint

.PHONY: lint
lint: vendor | $(BASE) $(GOLINT) # ❷
    @cd $(BASE) && ret=0 && for pkg in $(PKGS); do \
        test -z "$$($(GOLINT) $$pkg | tee /dev/stderr)" || ret=1 ; \
     done ; exit $$ret

As with glide, we give the user a chance to override which golint executable to use. By default, it uses a private copy, but the user can substitute their own with make GOLINT=/usr/bin/golint.

In ❶, we have the recipe to build the private copy. We simply issue go get3 to download and build golint. In ❷, the lint rule executes golint on each package contained in the $(PKGS) variable. We’ll explain this variable in the next section.

Working with non-vendored packages only

Some commands need to be provided with a list of packages. Because we use a vendor/ directory, the shortcut ./... is not what we expect as we don’t want to run tests on our dependencies4. Therefore, we compose a list of packages we care about:

PKGS = $(or $(PKG), $(shell cd $(BASE) && \
    env GOPATH=$(GOPATH) $(GO) list ./... | grep -v "^$(PACKAGE)/vendor/"))

If the user has provided the $(PKG) variable, we use it. For example, if they want to lint only the cmd package, they can invoke make lint PKG=hellogopher/cmd, which is more intuitive than specifying PKGS.

Otherwise, we just execute go list ./... but we remove anything from the vendor directory.

Tests

Here are some rules to run tests:

TIMEOUT = 20
TEST_TARGETS := test-default test-bench test-short test-verbose test-race
.PHONY: $(TEST_TARGETS) check test tests
test-bench:   ARGS=-run=__absolutelynothing__ -bench=.
test-short:   ARGS=-short
test-verbose: ARGS=-v
test-race:    ARGS=-race
$(TEST_TARGETS): test

check test tests: fmt lint vendor | $(BASE)
    @cd $(BASE) && $(GO) test -timeout $(TIMEOUT)s $(ARGS) $(PKGS)

A user can invoke tests in different ways:

  • make test runs all tests;
  • make test TIMEOUT=10 runs all tests with a timeout of 10 seconds;
  • make test PKG=hellogopher/cmd only runs tests for the cmd package;
  • make test ARGS="-v -short" runs tests with the specified arguments;
  • make test-race runs tests with race detector enabled.
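The fmt prerequisite used by the check/test rule above is not defined in these excerpts. A minimal version might look like the following sketch, which simply runs go fmt over the selected packages (the real Makefile may instead check gofmt output without rewriting files):

.PHONY: fmt
fmt: | $(BASE)
    @cd $(BASE) && $(GO) fmt $(PKGS)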
Test coverage

go test includes a test coverage tool. Unfortunately, it only handles one package at a time and you have to explicitly list the packages to be instrumented; otherwise, the instrumentation is limited to the currently tested package. If you provide too many packages, the compilation time will skyrocket. Moreover, if you want output compatible with Jenkins, you’ll need some additional tools.

COVERAGE_MODE    = atomic
COVERAGE_PROFILE = $(COVERAGE_DIR)/profile.out
COVERAGE_XML     = $(COVERAGE_DIR)/coverage.xml
COVERAGE_HTML    = $(COVERAGE_DIR)/index.html

.PHONY: test-coverage test-coverage-tools
test-coverage-tools: | $(GOCOVMERGE) $(GOCOV) $(GOCOVXML) # ❸
test-coverage: COVERAGE_DIR := $(CURDIR)/test/coverage.$(shell date -Iseconds)
test-coverage: fmt lint vendor test-coverage-tools | $(BASE)
    @mkdir -p $(COVERAGE_DIR)/coverage
    @cd $(BASE) && for pkg in $(PKGS); do \ # ❹
        $(GO) test \
            -coverpkg=$$($(GO) list -f '{{ join .Deps "\n" }}' $$pkg | \
                    grep '^$(PACKAGE)/' | grep -v '^$(PACKAGE)/vendor/' | \
                    tr '\n' ',')$$pkg \
            -covermode=$(COVERAGE_MODE) \
            -coverprofile="$(COVERAGE_DIR)/coverage/`echo $$pkg | tr "/" "-"`.cover" $$pkg ;\
     done
    @$(GOCOVMERGE) $(COVERAGE_DIR)/coverage/*.cover > $(COVERAGE_PROFILE)
    @$(GO) tool cover -html=$(COVERAGE_PROFILE) -o $(COVERAGE_HTML)
    @$(GOCOV) convert $(COVERAGE_PROFILE) | $(GOCOVXML) > $(COVERAGE_XML)

First, we define some variables to let the user override them. We also require the following tools (in ❸):

  • gocovmerge merges profiles from different runs into a single one;
  • gocov-xml converts a coverage profile to the Cobertura format;
  • gocov is needed to convert a coverage profile to a format handled by gocov-xml.

The rules to build those tools are similar to the rule for golint described a few sections ago.
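For reference, here is a sketch of what they could look like (the go get import paths are the upstream locations as far as I know and should be treated as assumptions):

GOCOVMERGE = $(BIN)/gocovmerge
GOCOV      = $(BIN)/gocov
GOCOVXML   = $(BIN)/gocov-xml

$(BIN)/gocovmerge: | $(BASE)
    go get github.com/wadey/gocovmerge
$(BIN)/gocov: | $(BASE)
    go get github.com/axw/gocov/gocov
$(BIN)/gocov-xml: | $(BASE)
    go get github.com/AlekSi/gocov-xml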

In ❹, for each package to test, we run go test with the -coverprofile argument. We also explicitly provide the list of packages to instrument to -coverpkg, using go list to get the dependencies of the tested package and keeping only our own.

Final result

While the main goal of using a Makefile was to work around GOPATH, it’s also a good place to hide the complexity of some operations, notably around test coverage.

The excerpts provided in this post are a bit simplified. Have a look at the final result for more perks!

  1. In Go, “vendoring” is about both bundling and dependency management. As the Go ecosystem matures, the bundling part (fixed snapshots of dependencies) may become optional but the vendor/ directory may stay for dependency management (retrieval of the latest versions of dependencies matching a set of constraints). 

  2. If you don’t want to automatically update glide.lock when a change is detected in glide.yaml, rename the target to deps-update and make it a phony target. 

  3. There is some irony in bad-mouthing go get and then immediately using it because it is convenient.

  4. I think ./... should not include the vendor/ directory by default. Dependencies should be trusted to have run their own tests in the environment they expect to succeed in. Unfortunately, this is unlikely to change.

Francois Marier: IPv6 and OpenVPN on Linode Debian/Ubuntu VPS

5 February, 2017 - 20:20

Here is how I managed to extend my OpenVPN setup on my Linode VPS to include IPv6 traffic. This ensures that clients can route all of their traffic through the VPN and avoid leaking IPv6 traffic, for example. It also enables clients on IPv4-only networks to receive a routable IPv6 address and connect to IPv6-only servers (i.e. running your own IPv6 tunnel broker).

Request an additional IPv6 block

The first thing you need to do is get a new IPv6 address block (or "pool" as Linode calls it) from which you can allocate a single address to each VPN client that connects to the server.

If you are using a Linode VPS, there are instructions on how to request a new IPv6 pool. Note that you need to get an address block between /64 and /112. A /116 like Linode offers won't work in OpenVPN. Thankfully, Linode is happy to allocate you an extra /64 for free.

Set up the new IPv6 address

If your server only has a single IPv4 address and a single IPv6 address, then a simple DHCP-backed network configuration will work fine. To add the second IPv6 block, on the other hand, I had to change my network configuration (/etc/network/interfaces) to this:

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
    pre-up iptables-restore /etc/network/iptables.up.rules

iface eth0 inet6 static
    address 2600:3c01::xxxx:xxxx:xxxx:939f/64
    gateway fe80::1
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

iface tun0 inet6 static
    address 2600:3c01:xxxx:xxxx::/64
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

where 2600:3c01::xxxx:xxxx:xxxx:939f/64 (bound to eth0) is your main IPv6 address and 2600:3c01:xxxx:xxxx::/64 (bound to tun0) is the new block you requested.

Once you've set up the new IPv6 block, test it from another IPv6-enabled host using:

ping6 2600:3c01:xxxx:xxxx::1
OpenVPN configuration

The only thing I had to change in my OpenVPN configuration (/etc/openvpn/server.conf) was to replace:

proto udp

to:

proto udp6

in order to make the VPN server available over both IPv4 and IPv6, and to add the following lines:

server-ipv6 2600:3c01:xxxx:xxxx::/64
push "route-ipv6 2000::/3"

to bind to the right V6 address and to tell clients to tunnel all V6 Internet traffic through the VPN.

In addition to updating the OpenVPN config, you will need to add the following line to /etc/sysctl.d/openvpn.conf:

net.ipv6.conf.all.forwarding=1

and the following to your firewall (e.g. /etc/network/ip6tables.up.rules):

# openvpn
-A INPUT -p udp --dport 1194 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

in order to ensure that IPv6 packets are forwarded from the eth0 network interface to tun0 on the VPN server.

With all of this done, apply the settings by running:

sysctl -p /etc/sysctl.d/openvpn.conf
ip6tables-apply
systemctl restart openvpn.service
Testing the connection

Now connect to the VPN using your desktop client and check that the default IPv6 route is set correctly using ip -6 route.

Then you can ping the server's new IP address:

ping6 2600:3c01:xxxx:xxxx::1

and from the server, you can ping the client's IP (which you can see in the network settings):

ping6 2600:3c01:xxxx:xxxx::1002

Once both ends of the tunnel can talk to each other, you can try pinging an IPv6-only server from your client:

ping6 ipv6.google.com

and then pinging your client from an IPv6-enabled host somewhere:

ping6 2600:3c01:xxxx:xxxx::1002

If that works, other online tests should also work.

Bits from Debian: Debian welcomes its Outreachy interns

5 February, 2017 - 18:00

Better late than never, we'd like to welcome our three Outreachy interns for this round, lasting from the 6th of December 2016 to the 6th of March 2017.

Elizabeth Ferdman is working in the Clean Room for PGP and X.509 (PKI) Key Management.

Maria Glukhova is working in Reproducible builds for Debian and free software.

Urvika Gola is working in improving voice, video and chat communication with free software.

From the official website: Outreachy helps people from groups underrepresented in free and open source software get involved. We provide a supportive community for beginning to contribute any time throughout the year and offer focused internship opportunities twice a year with a number of free software organizations.

The Outreachy program is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate part of their free time to mentoring students and doing outreach tasks, to the help of the Software Freedom Conservancy, which provides administrative support for Outreachy, and to the continued support of Debian's donors, who provide funding for the internships.

Debian will also participate in the next round of Outreachy, during the summer of 2017. More details will follow in the next few weeks.

Join us and help extend Debian! You can follow the work of the Outreachy interns by reading their blogs (they are syndicated in Planet Debian), and chat with us in the #debian-outreach IRC channel and on the mailing list.

Congratulations, Elizabeth, Maria and Urvika!

Shirish Agarwal: Flights of fancy – how to figure out trends for airline tickets.

5 February, 2017 - 15:28

Google Flights fare tracking between PNQ and YUL and back, economy fares.

A couple of weeks ago, on some mailing list, on IRC or somewhere else, somebody mentioned that people always put a higher amount for their own airline expenditure when asking for sponsorship.

Last year, three months passed between sending the application and getting the approval for sponsorship. When I put up my application, I had added 10% to the cheapest prevailing flight ticket prices that skyscanner or other meta-search engines were showing me at that point in time.

I was sceptical whether the amount that I had put down for the to-and-fro tickets would be enough. Strangely, I was lucky enough to get my ticket around the new estimated price. I should mention though that there were only 2 tickets left at that price, and if I had waited just a few hours more, those tickets would have gone too; all other tickets were around 25% more expensive. The only reasons I could fathom are –

a. Luck, pure and simple.

b. Going at the end of the tourism season – This was evident as I was able to book my extended stay at a hostel just 2 days before my stay at UCT (University of Cape Town) was over. This was corroborated by hostel staff and shop-owners, as well as by whatever information I found on the web before and during my stay in SA.

c. South Africa probably being more lenient than Canada in giving and processing visas.

While looking at the third point, I thought I had better check the world tourism rankings and saw the Wikipedia page for them. Interestingly, South Africa seems to have a slight edge over Canada when it comes to the statistics, which strengthens my assumption that more people probably apply for SA than for Canada because they know they have a better chance of making it through visa processing. It stands to reason that more people would apply for a tourist or similar short-term visa if they know they have a good chance of getting through.

While researching the topic I also hunted for the hardest places to get a visa for, and was surprised to find India listed therein. Coincidentally, that site also has a UK domain. It does burst the ‘Incredible India’ bubble a little bit.

As a newbie who had no clue, I knew I was probably a victim of information asymmetry: the airlines have much more information about travel trends, ongoing trends at airports, the politics and economics of countries, the price of crude oil, profit, competition and probably many more factors that I haven’t taken into account which decide fares.

One of the interesting finds while researching the topic was that airlines didn’t pass on fuel savings to their customers. Now I don’t know whether this was the same around the world or only in the UK. I am shocked that British (and by definition EU, as the UK was part of the EU at that time) travellers or consumer groups didn’t file a suit in a court of law, as the above smells of anticompetitive behaviour. The most shocking statement was this –

“Average fares to Spain rose by 10 per cent over the same period.” – Telegraph, UK.

In order to lessen this information asymmetry a bit, I used Google Flights and its data from the past 2 months to see how the fares have been changing and to get some insight into where they might end up. I know Google is hated by one and all, but in this instance I couldn’t find any comparable site which does this kind of thing.

As can be seen in the graph, the tickets started relatively cheap, from around INR 65k, and are around INR 80k at this point in time. That is a jump of around 30% in the last couple of months. All of these flights have a layover somewhere in Europe, with a second flight from there to Canada.

The one which didn’t show much movement is the direct flight between BOM and YUL, but then this seems to be a premium service. A direct flight from BOM to YUL costs north of INR 90k/-, which doesn’t make much sense unless one is fond of spending 13+ hours in flight. Definitely not my cup of tea.

Layovers make the experience a bit more bearable.

While the real action is probably three or more months away, it’s interesting to see how things are panning out, at least for airline ticket prices and the dynamics involved therein.

Even with all the above attempts at finding an answer, I’m no closer to figuring out how to estimate airline ticket prices when the booking window is a largish three months. Any ideas, anybody?

If the previous jump is any indication, then a 10-15% escalation might not hack it this time around. Any strategies that people could advise while trying to put together a ball-park figure?


Filed under: Miscellenous Tagged: #air-travel, #Airfares, #Competition, #Price Estimation, #Sponsorship

Dirk Eddelbuettel: RcppCCTZ 0.2.1

5 February, 2017 - 05:41

A new minor version, 0.2.1, of RcppCCTZ is now on CRAN. It corrects a possible shortcoming and rounding issue in the conversion from the internal representation (in C++11, using int64_t) to the two double values for seconds and nanoseconds handed to R. Two other minor changes are also summarized below.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. The RcppCCTZ page has a few usage examples and details.

The changes in this version are summarized here:

Changes in version 0.2.1 (2017-02-04)
  • Conversion from timepoint to two double values now rounds correctly (#14 closing #12, with thanks to Leonardo)

  • The Description was expanded to stress the need for a modern C++11 compiler; g++-4.8 (as on 'trusty' eg in Travis CI) works

  • Travis CI is now driven via run.sh from our fork

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: nanotime 0.1.1

5 February, 2017 - 05:33

A new version of the nanotime package for working with nanosecond timestamps is now on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting, and the bit64 package for the actual integer64 arithmetic.

This release adds an improved default display format always showing nine digits of fractional seconds. It also changes the print() method to call format() first, and we started to provide some better default Ops methods. These fixes were suggested by Matt Dowle. We also corrected a small issue which could lead to precision loss in formatting as pointed out by Leonardo Silvestri.

Changes in version 0.1.1 (2017-02-04)
  • The default display format now always shows nine digits (#10 closing #9)

  • The default print method was updated to use formatted output, and a new converter as.integer64 was added

  • Several 'Ops' methods are now explicitly defined, allowing casting of results (rather than falling back on bit64 behaviour)

  • The format routine is now more careful about not losing precision (#13 closing #12)

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Markus Koschany: My Free Software Activities in January 2017

5 February, 2017 - 04:26

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • In January 2017 we had the last chance to get new upstream releases into the next stable release of Debian 9 aka Stretch. Hence I packaged new versions of pygame-sdl2, renpy, fife, unknown-horizons, redeclipse and redeclipse-data and also backported Red Eclipse to Jessie.
  • I uploaded fifechan to unstable and applied an upstream patch to fix a segmentation fault (#852247) in Unknown Horizons.
  • Package cleanups and improvements: in freeorion (#843538) I enabled support for mips64el again; I tidied up gtkatlantic, powermanga, lincity-ng, opencity and tecnoballz; I applied a patch from Reiner Herrmann to make the build of netpanzer reproducible (#827150); in spring I changed the build-dependency on asciidoc to asciidoc-base (#850387), although it turned out later that this wasn’t strictly needed. I also removed ConvertUTF8-related code from spring because it might be non-free. I don’t think this is necessarily true, but I didn’t want to argue with Lintian in this case.
  • I sponsored a new upstream release of pentobi for Juhani Numminen.
  • I backported minetest 0.4.5 to jessie-backports and fixed #851114, which I think was not really an issue since we already provide the font sources in Debian and Minetest depends on the respective package.
  • I triaged RC bug #847812 in pysolfc, provided a patch and reassigned the issue to src:pillow. Apparently this affected a lot more 32 bit applications written in Python.
Debian Java

Debian LTS

This was my eleventh month as a paid contributor and I have been paid to work 12.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 16 January until 22 January I was in charge of our LTS frontdesk. I triaged security issues in imagemagick, wordpress, hesiod, opus, mysql-5.5, netbeans, groovy and zoneminder.
  • DLA-779-1. Issued a security update for Tomcat 7 fixing 1 CVE and a regression when running Tomcat with SecurityManager enabled.
  • DLA-761-2. Issued a regression update for python-bottle. (Debian bug #850176).
  • DLA-781-1 and DLA-781-2. Issued a security update for Asterisk fixing 2 CVEs after I had prepared the package last month. Later, Brad Barnett discovered a regression when using SIP communication and provided assistance with debugging the issue. I corrected this in DLA-781-2.
  • DLA-792-1. Issued a security update for libphp-swiftmailer fixing 1 CVE.
  • DLA-793-1. Issued a security update for opus fixing 1 CVE.
  • DLA-794-1. Issued a security update for groovy fixing 1 CVE.
  • DLA-797-1. Issued a security update for mysql-5.5 fixing 10 CVEs. The update was prepared by Lars Tangvald.
  • DLA-813-1. Issued a security update for wordpress fixing 9 CVEs.
Misc
  • In xarchiver (#850103) I added binutils to the list of suggested packages, in iftop (#850040) I applied a patch from Brian Russell, and I packaged a new upstream release of mediathekview, a Java application to watch and download broadcasts from German television stations. I had to make some major packaging changes because the build system switched from Ant to Gradle, but there were fewer issues than expected.

Chris Lamb: The ChangeLog #237: Reproducible Builds and Secure Software

5 February, 2017 - 03:39

I recently appeared on the Changelog podcast to talk about the Reproducible Builds project:


Whilst I am an avid podcast listener, this was actually my first podcast appearance. It was a curious and somewhat disconcerting feeling to be "just" talking to Adam and Jerod in the moment, yet knowing all the time that anything and everything I said would be distributed more widely in the future.

Ross Gammon: My Monthly Update for January 2017

5 February, 2017 - 00:55

It has been a quiet start to the year due to work keeping me very busy. Most of my spare time (when not sitting shattered on the sofa) was spent resurrecting my old website from backups. My son also had plenty of visitors, which prompted me to restart work on my model railway in the basement. Last year I received a whole heap of track, and also a tunnel formation from a friend at work. I managed to finish the supporting structure for the tunnel, and connect one end of it to the existing track layout. The next step (which will be a bit harder) is to connect the other end of the tunnel into the existing layout. The basement is one of my favourite ways to keep my son and his friends occupied when there is a visit. The railway and music studio are very popular with the little guests.

Debian
  • Packaged the latest Gramps 4.2.5 release for Debian so that it will be part of the Stretch release.
  • Packaged the latest abcmidi release so it too would be part of Stretch. The upstream author had changed his website, so it took a while to locate a tarball.
  • Tested my latest patches to convert Cree.py to Qt5, but found another Qt4-to-Qt5 change to take into account (the SIGNAL function). I ran out of time to fully investigate before Cree.py was booted out of testing again. I am seriously considering requesting the removal of Cree.py from Debian, as the upstream maintainer does not seem very active any more, and I am a little tired of acting as upstream for a project that I don’t actually use myself. It was only because it was a reverse dependency of osm-gps-map that I originally got involved.
  • Started preparing a Gramps 4.2.5 backport for Jessie, but found that the tests I enabled in unstable were failing in the Jessie build. I need to investigate this further.
Ubuntu
  • Announced the Ubuntu Studio 16.04.2 point release date on the Ubuntu Studio mailing lists, asking for testers. The date subsequently got put back to February the 9th.
  • Upgraded my Ubuntu Studio machine from Wily to Xenial.
Other
  • Resurrected my old Drupal Gammon One Name Study website. I used Drupal VM to get the site going again, before transferring it to the new webhost. It was originally a Drupal 7 site, and I did not have the required versions of Ansible & Vagrant on my Ubuntu Xenial machine, so the process was quite involved. I will blog about that separately, as it may be a useful lesson for others. As part of that, I started on a backport of Vagrant, but found a bug which I need to follow up on.
  • Also managed to extract my old WordPress blog posts from the same machine that had the failed Drupal instance, and imported them into this blog. I learnt some things in that process that I will blog about at some point.
Plan status from last month & update for next month

Debian

Before the 5th February 2017 Debian Stretch hard freeze I hope to:

For the Debian Stretch release:

Generally:

  • Finish the Gramps 4.2.5 backport for Jessie.
  • Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release.
  • Begin working again on all the new stuff I want packaged in Debian.
Ubuntu
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages. – Still to do (actually started today)
  • Reapply to become a Contributing Developer. – Still to do
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Started
  • Start testing & bug triaging Ubuntu Studio packages. – Still to do
  • Test Len’s work on ubuntustudio-controls – Still to do
Other
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project. – Done
  • Give JMRI a good try out and look at what it would take to package it. – In progress
  • Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software – fun!).

Jonathan Dowland: Blinkenlights, part 2

5 February, 2017 - 00:17

To start configuring my NAS to use the new blinkenlights, I thought I'd begin with a really easy job: I plug in my iPod, a script runs to back it up, then the iPod gets unmounted. It's one of the simpler jobs to start with because the iPod is a simple block device and there's no encryption in play. For now, I'm also going to assume the LED is going to be used exclusively for this job. In the future I will want many independent jobs to use the LED to signal things, and figuring out how that will work is going to be much harder.

I'll skip over the journey and go straight to the working solution. I have a systemd job that is used to invoke a sync from the iPod as follows:

[Service]
Type=oneshot
ExecStart=/bin/mount /media/ipod
ExecStart=/usr/local/bin/blinkstick --index 1 --limit 10 --set-color 33c280
ExecStart=/usr/bin/rsync ...
ExecStop=/bin/umount /media/ipod
ExecStop=/usr/local/bin/blinkstick --index 1 --limit 10 --set-color green

[Install]
WantedBy=dev-disk-by\x2duuid-A2EA\x2d96ED.device

[Unit]
OnFailure=blinkstick-fail.service

/media/ipod is a classic mount configured in /etc/fstab. I've done this rather than use the newer systemd .mount units, which sadly don't give you enough hooks for running things after unmount or in the failure case. This feels quite unnatural; it would be much more "systemdy" to Requires= the mount unit, but I couldn't figure out an easy way to set the LED to green after the unmount. I'm sure it's possible, but convoluted.
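For reference, the corresponding /etc/fstab line might look something like this (a sketch: the UUID matches the WantedBy device unit above, while the filesystem type and mount options are assumptions):

UUID=A2EA-96ED  /media/ipod  vfat  noauto,user  0  0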

The first blinkstick command sets the LED to a colour to indicate "in progress". I explored some of the blinkstick tool's options for a fading or throbbing colour but they didn't work very well. I'll take another look in the future. After the LED is set, the backup job itself runs. The last blinkstick command, which is only run if the previous umount has succeeded, sets the LED to indicate "safe to unplug".

The WantedBy here instructs systemd that when the iPod device-unit is activated, it should activate my backup service. I can refer to the iPod device-unit using this name based on the partition's UUID; this is not the canonical device name that you see if you run systemctl, but it's much shorter and, crucially, it's stable: the canonical name depends on exactly where you plugged the device in and what other devices might have been connected at the same time.
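If you want to derive that escaped unit name yourself, systemd-escape can do it (shown here for the UUID used above):

$ systemd-escape -p --suffix=device /dev/disk/by-uuid/A2EA-96ED
dev-disk-by\x2duuid-A2EA\x2d96ED.device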

If something fails, a second unit blinkstick-fail.service gets activated. This is very short:

[Service]
ExecStart=/usr/local/bin/blinkstick --index 1 --limit 50 --set-color red

This simply sets the LED to be red.

Again it's a bit awkward that in two cases I'm setting the LED with a simple Exec line but in the third I have to activate a separate systemd service: this seems to be the nature of the beast. At least when I come to look at concurrent jobs all interacting with the LED, the failure case should be simple: red trumps any other activity, and the user must go and check what's up.

Thorsten Alteholz: My Debian Activities in January 2017

4 February, 2017 - 23:27

FTP assistant

This month I only marked 146 packages for accept and rejected 25 packages. I sent just 3 emails to maintainers asking questions.

Nevertheless I passed a big milestone: all in all, I have now accepted more than 10000 packages!

Debian LTS

This was my thirty-first month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 12.75 hours. During that time I did uploads of

  • [DLA 805-1] bind9 security update for three CVEs
  • [DLA 806-1] zoneminder security update for one CVE

Unfortunately the upload of jasper had to be postponed, as there is no upstream fix for most of the open CVEs yet.
I also suggested to mark the slurm-llnl CVE as , as the patch would be too invasive. Further, I did another week of frontdesk work.

Last but not least I took care of about 140 items on the TODO list [1]. OK, it was not that much work, but the enormous number is impressive :-). I also had a look at [2] and filed bugs against two packages. Within hours the maintainers responded to those bugs and clarified everything, so the CVEs could be marked as not-affected and nobody has to care about them anymore. This is a good example of how the knowledge of the maintainer can help the security teams! So, if you have some time left, have a look at [3] and take care of something.

[1] https://security-tracker.debian.org/tracker/status/todo
[2] https://security-tracker.debian.org/tracker/status/unreported
[3] https://security-tracker.debian.org/tracker

Other stuff

This month I sponsored a new round of sidedoor and printrun. After advocating for Dara Adib to become a Debian Maintainer, I hope my activities as sponsor can be reduced again :-).

Further, I uploaded another version of setserial, but as you can see in #850762 it does not seem to satisfy everybody. I also uploaded new upstream versions of duktape and pipexec.

As I didn’t do any DOPOM in December I adopted two packages in January: pescetti and salliere. I dedicate those uploads to my aunt Birgit, who was a passionate bridge player. You will never be forgotten.

Niels Thykier: The stretch freeze is coming

4 February, 2017 - 20:53

The soft freeze has been ongoing for almost a month now and the full stretch freeze will start tomorrow night (UTC). It has definitely been visible in the number of unblock requests that we have received so far. Fortunately, we are nowhere near the rate of the jessie freeze. At the moment, all unblock requests are waiting for the submitter (either for a clarification or an upload).

Looking at stretch at a glance (items are in no particular order):

Secure boot support

Currently, we are blocked on two items:

  • We do not have signing done yet for the boot packages (not even manual signing).
  • Our shim is not yet signed, so no hardware would be trusting our boot chain out of the box.

After they are done, we are missing a handful of uploads to provide a signed bootloader etc., plus d-i and some infrastructure bits need to be updated. At the moment, we are waiting for a handful of key people/organisations to move on their part. As such, there is not a lot you can do to assist here (unless you are already involved in the work).
On the flip side, if both of these items are resolved soon, there is a good chance that we can support secure boot in stretch. See bug #820036 and its blockers for more information on the remaining items.

Where can you help with the release?

At the moment, the best you can do is to:

  • Test (packages, upgrades, etc.) and report bugs
  • File bugs against release-notes for issues that should be documented
  • Fix RC bugs (please see the next section)

 

Release Critical Bug report

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1148 (Including 193 bugs affecting key packages)
    • Affecting stretch: 294 (key packages: 158)
That’s the number we need to get down to zero before the release. They can be split into two big categories:

      • Affecting stretch and unstable: 232 (key packages: 134)
        Those need someone to find a fix, or to finish the work to upload a fix to unstable:

        • 30 bugs are tagged ‘patch’. (key packages: 21)
          Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 17 bugs are marked as done, but still affect unstable. (key packages: 5)
          This can happen due to missing builds on some architectures, for example. Help investigate!
        • 185 bugs are neither tagged patch, nor marked done. (key packages: 108)
          Help make a first step towards resolution!
      • Affecting stretch only: 62 (key packages: 24)
        Those are already fixed in unstable, but the fix still needs to migrate to stretch. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.

        • 36 bugs are in packages that are unblocked by the release team. (key packages: 14)
        • 26 bugs are in packages that are not unblocked. (key packages: 10)

Filed under: Debian, Release-Team

Holger Levsen: Going to FOSDEM after switching away from Debian

4 February, 2017 - 03:21

So last weekend I attended Devconf.cz in Brno and together with Dennis Gilmore gave a talk about Reproducible Builds and Fedora (Slides) which was fun and - I think - pretty well received.

Then on the following Tuesday, three days ago, after using Debian as the primary OS on my primary computer for more than two decades, I finally made the long anticipated switch, and I must say, so far I'm really happy with my new environment, even though I'm not yet again using a tiled window manager (but Xfce instead), cannot yet do all the things I could do before, and even have to reconfigure a VM twice and restart another to get wireless networking back after suspend and resume…

So what have I done? Nothing too dramatic: I've switched to Qubes OS, and now I'm running Debian jessie and stretch, Fedora and Whonix on my primary computer, just not as my primary OS.

IOW: hardware is basically software and while I want the red pill, who knows what's really inside the box?

If you have no idea what I'm talking about, this intro about Qubes OS might be helpful. Or you dive directly into this six year old post, where Joanna Rutkowska described how she partitioned her digital life using Qubes.

I'm not sure where this journey will lead me, but I'm confident this is the right direction. And in case you wondered, I'll keep working on Debian, as far as I know now.

Benjamin Mako Hill: New Dataset: Five Years of Longitudinal Data from Scratch

4 February, 2017 - 03:01

Scratch is a block-based programming language created by the Lifelong Kindergarten Group (LLK) at the MIT Media Lab. Scratch gives kids the power to use programming to create their own interactive animations and computer games. Since 2007, the online community that allows Scratch programmers to share, remix, and socialize around their projects has drawn more than 16 million users who have shared nearly 20 million projects and more than 100 million comments. It is one of the most popular ways for kids to learn programming and among the larger online communities for kids in general.

Front page of the Scratch online community (https://scratch.mit.edu) during the period covered by the dataset.

Since 2010, I have published a series of papers using quantitative data collected from the database behind the Scratch online community. As the source of data for many of my first quantitative and data scientific papers, it’s not a major exaggeration to say that I have built my academic career on the dataset.

I was able to do this work because I happened to be doing my masters in a research group that shared a physical space (“The Cube”) with LLK and because I was friends with Andrés Monroy-Hernández, who started in my masters cohort at the Media Lab. A year or so after we met, Andrés conceived of the Scratch online community and created the first version for his masters thesis project. Because I was at MIT and because I knew the right people, I was able to get added to the IRB protocols and jump through the hoops necessary to get access to the database.

Over the years, Andrés and I have heard over and over, in conversation and in reviews of our papers, that we were privileged to have access to such a rich dataset. More than three years ago, Andrés and I began trying to figure out how we might broaden this access. Andrés had the idea of taking advantage of the launch of Scratch 2.0 in 2013 to focus on trying to release the first five years of Scratch 1.x online community data (March 2007 through March 2012) — most of the period that the codebase he had written ran the site.

After more work than I have put into any single research paper or project, Andrés and I have published a data descriptor in Nature’s new journal Scientific Data. This means that the data is now accessible to other researchers. The data includes five years of detailed longitudinal data organized in 32 tables with information drawn from more than 1 million Scratch users, nearly 2 million Scratch projects, more than 10 million comments, more than 30 million visits to Scratch projects, and much more. The dataset includes metadata on user behavior as well as the full source code for every project. Alongside the data is the source code for all of the software that ran the website and that users used to create the projects, as well as the code used to produce the dataset we’ve released.

Releasing the dataset was a complicated process. First, we had to navigate important ethical concerns about the impact that a release of any data might have on Scratch’s users. Toward that end, we worked closely with the Scratch team and the ethics board at MIT to design a protocol for the release that balanced these risks with the benefit of a release. The most important feature of our approach in this regard is that the dataset we’re releasing is limited to only public data. Although the data is public, we understand that computational access to data is different in important ways from access via a browser or API. As a result, we’re requiring anybody interested in the data to tell us who they are and agree to a detailed usage agreement. The Scratch team will vet these applicants. Although we’re worried that this creates a barrier to access, we think this approach strikes a reasonable balance.

Beyond the social and ethical issues, creating the dataset was an enormous task. Andrés and I spent Sunday afternoons over much of the last three years going column-by-column through the MySQL database that ran Scratch. We looked through the source code and the version control system to figure out how the data was created. We spent an enormous amount of time trying to figure out which columns and rows were public. Most of our work went into creating detailed codebooks and documentation that we hope make the process of using this data much easier for others (the data descriptor is just a brief overview of what’s available). Serializing some of the larger tables took days of computer time.

In this process, we had a huge amount of help from many others, including an enormous amount of time and support from Mitch Resnick, Natalie Rusk, Sayamindu Dasgupta, and Benjamin Berg at MIT, as well as from many others on the Scratch Team. We also had an enormous amount of feedback from a group of a couple dozen researchers who tested the release, as well as others who helped us work through the technical, social, and ethical challenges. The National Science Foundation funded both my work on the project and the creation of Scratch itself.

Because access to data has been limited, there has been less research on Scratch than the importance of the system warrants. We hope our work will change this. We can imagine studies using the dataset by scholars in communication, computer science, education, sociology, network science, and beyond. We’re hoping that by opening up this dataset to others, scholars with different interests, different questions, and in different fields can benefit in the way that Andrés and I have. I suspect that there are other careers waiting to be made with this dataset and I’m excited by the prospect of watching those careers develop.

You can find out more about the dataset, and how to apply for access, by reading the data descriptor on Nature’s website.

Marc 'Zugschlus' Haber: insecssh

4 February, 2017 - 01:52

Yes, You Should Not discard cached ssh host keys without looking. An unexpected change of an ssh host key is always a reason to step back from the keyboard and think. However, there are situations when you know that a system’s ssh host key has changed, for example when the system reachable under this host name has been redeployed, which happens increasingly often proportionally to the devopsness of your environment, or for example in test environments.

Later versions of ssh offer you the ssh-keygen -R command line to paste from the error message, so that you can abort the connection attempt, paste the command and reconnect again. This will still ask for confirmation of the new host key though.

Almost every sysadmin has an alias or wrapper to make handling of this situation easier. Solutions range from using “StrictHostKeyChecking no” and/or “UserKnownHostsFile /dev/null”, turning off this layer of security altogether either globally or usually too broadly, to more-or-less sophisticated solutions that involve turning off known-hosts file hashing, parsing client output and/or grep-sed-awk magic. grml even comes with an insecssh script that is rather neat and that I used until I developed my own.

Later ssh versions (not the 6.7 in Debian jessie, but the 7.4 in Debian sid) offer a -G command line option which gives access to the configuration options after the complete client configuration has been processed. This allows you to neatly get at the actual hostname the ssh client will connect to. This can, in turn, be used to obtain the IP addresses so that all traces of the host and its IP addresses can be removed from the known_hosts file before the actual connection is made.
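For example (host names made up), you can see what the client would actually connect to for a given alias:

$ ssh -G myalias | grep '^hostname '
hostname real-target.example.org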

The following script is the insecssh that I currently use. It only uses documented interfaces to the ssh client which I consider rather neat.

#!/bin/bash

HOSTNAME=$(ssh -G "$@" | grep '^hostname ' | awk '{print $2}')
CNAME="1"
while [ -n "$CNAME" ]; do
  CNAME=$(dig +short +search -t CNAME "$HOSTNAME")
  if [ -n "$CNAME" ]; then
    HOSTNAME="$CNAME"
  fi
done
IP4ADDRESS=$(dig +short +search -t A "$HOSTNAME")
IP6ADDRESS=$(dig +short +search -t AAAA "$HOSTNAME")

echo "HOSTNAME $HOSTNAME"
echo "IP4ADDRESS $IP4ADDRESS"
echo "IP6ADDRESS $IP6ADDRESS"

[ -n "$HOSTNAME" ] && ssh-keygen -R "$HOSTNAME"
[ -n "$IP4ADDRESS" ] && ssh-keygen -R "$IP4ADDRESS"
[ -n "$IP6ADDRESS" ] && ssh-keygen -R "$IP6ADDRESS"

ssh -oStrictHostKeyChecking=no -oVisualHostKey=yes "$@"

Ugliness results from the necessity of following CNAMEs manually, since there is - to my knowledge (educate me!) - no command line utility that has the output simplicity and selection powers of dig and can automatically follow CNAMEs regardless of whether the recursor has the CNAME cached.

Use this script if you know that a host has changed its host key. It will first zap all knowledge of the previous host key from known_hosts and then invoke the ssh client with the given arguments from its command line, with options added so that you’ll see the new host key in random art and won’t need to manually confirm the new host key.

Jonathan Dowland: Blinkenlights!

3 February, 2017 - 22:35

blinkenlights!

Late last year, I was pondering how one might add a status indicator to a headless machine like my NAS to indicate things like failed jobs.

After a brief run-through of some options (a USB-based custom device; a device pretending to be a keyboard attached to a PS/2 port; commandeering the HD activity LED; commandeering the PC speaker wire), I decided that I didn't have the time to learn the kind of skills needed to build something at that level and opted to buy a pre-assembled programmable USB thing instead, called the BlinkStick.

Little did I realise that my friend Jonathan McDowell thought that this was an interesting challenge and actually managed to design, code and build something! Here's his blog post outlining his solution and here's his code on github (or canonically)

Even though I've bought the blinkstick, given Jonathan's efforts (and the bill of materials) I'm going to have to try to assemble this for myself and give it a go. I've also managed to borrow an Arduino book from a colleague at work.

Either way, I still have some work to do on the software/configuration side to light the LEDs up at the right time and colour based on the jobs running on the NAS and their state.

Sven Hoexter: chromium --enable-remote-extensions

3 February, 2017 - 20:31

From time to time I have to use chromium for creepy stuff like Lifesize video sharing with document sharing. The document sharing requires a chromium extension. Suddenly that stopped working today and I could not reinstall the extension. After trying a lot of things I had a look at the Debian changelog and found out about:

chromium --enable-remote-extensions

See also #851927.

Dirk Eddelbuettel: RPushbullet 0.3.0

3 February, 2017 - 18:54

A major new update of the RPushbullet package is now on CRAN. RPushbullet interfaces the neat Pushbullet service for inter-device messaging, communication, and more. It lets you easily send alerts to your browser, phone, tablet, ... -- or all at once.

This release owes a lot to Seth Wenchel who was instrumental in driving several key refactorings. We now use the curl package instead of relying on system() calls to the curl binary. We also switched from RJSONIO to jsonlite. A new helper function to create the required resource file was added, and several other changes were made, as detailed below in the extract from the NEWS.Rd file.

Changes in version 0.3.0 (2017-02-03)
  • The curl binary use was replaced by use of the curl package; several new helper functions added (PRs #30, #36 by Seth closing #29)

  • Use of RJSONIO was replaced by use of jsonlite (PR #32 by Seth closing #31)

  • A new function pbSetup was added to aid creating the resource file (PRs #34, #37 by Seth and Dirk)

  • The package initialization was refactored so that non-loading calls such as RPushbullet::pbPost(...) now work (#33 closing #26)

  • The test suite was updated and extended

  • The Travis script was updated to use run.sh

  • DESCRIPTION, README.md and other files were updated for current R CMD check standards

  • Deprecated parts such as 'type=address' were removed, and the documentation was updated accordingly.

  • Coverage support was added (in a 'on-demand' setting as automated runs would need a Pushbullet API token)
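As a quick illustration of the refactored package (a sketch; it assumes the resource file has been created, for example via the new pbSetup helper, and that at least one device is registered):

library(RPushbullet)
pbPost("note", "Backup finished", "Nightly backup completed without errors.")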

Courtesy of CRANberries, there is also a diffstat report for this release.

More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Petter Reinholdtsen: A day in court challenging seizure of popcorn-time.no for #domstolkontroll

3 February, 2017 - 17:10

On Wednesday, I spent the entire day in court in Follo Tingrett representing the member association NUUG, alongside the member association EFN and the DNS registrar IMC, challenging the seizure of the DNS name popcorn-time.no. It was interesting to sit in a court of law for the first time in my life. Our team can be seen in the picture above: attorney Ola Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil Eriksen and NUUG board member Petter Reinholdtsen.

The case at hand is that the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime (aka Økokrim) decided on its own to seize a DNS domain early last year, without following the official policy of the Norwegian DNS authority, which requires a court decision. The web site in question was a site covering Popcorn Time. And Popcorn Time is the name of a technology with both legal and illegal applications. Popcorn Time is a client combining searching a Bittorrent directory available on the Internet with downloading/distributing content via Bittorrent and playing the downloaded content on screen. It can be used illegally if it is used to distribute content against the will of the rights holder, but it can also be used legally to play a lot of content, for example the millions of movies available from the Internet Archive or the collection available from Vodo. We created a video demonstrating legal use of Popcorn Time and played it in court. It can of course be downloaded using Bittorrent.

I did not quite know what to expect from a day in court. The government held on to their version of the story and we held on to ours, and I hope the judge is able to make sense of it all. We will know in two weeks’ time. Unfortunately I do not have high hopes, as the government has the upper hand here, with more knowledge about the case, better training in handling criminal law and in general higher standing in the courts than a fairly unknown DNS registrar and two member associations. It is expensive to be right, also in Norway. So far the case has cost more than NOK 70 000,-. To help fund the case, NUUG and EFN have asked for donations, and managed to collect around NOK 25 000,- so far. Given the presentation from the government, I expect it to appeal if the case goes our way. And if the case does not go our way, I hope we have enough funding to appeal.

From the other side came two people from Økokrim. On the benches, appearing to be part of the group from the government, were two people from the Simonsen Vogt Wiig lawyer office, and three others I could not identify. Økokrim had proposed to present two witnesses from The Motion Picture Association, but this was rejected because they did not speak Norwegian and it was a bit late to bring in a translator; perhaps the two from the MPA were present anyway. All seven appeared to know each other. Good to see the case is taken seriously.

If you, like me, believe the courts should be involved before a DNS domain is hijacked by the government, or you believe the Popcorn Time technology has a lot of useful and legal applications, I suggest you too donate to the NUUG defense fund. Both Bitcoin and bank transfer are available. If NUUG gets more than we need for the legal action (very unlikely), the rest will be spent promoting free software, open standards and unix-like operating systems in Norway, so no matter what happens the money will be put to good use.

If you want to learn more about the case, I recommend you check out the blog posts from NUUG covering the case. They cover the legal arguments on both sides.
