Planet Debian

Planet Debian - http://planet.debian.org/

Michal Čihař: CI coverage from Windows, Linux and OSX

4 hours 21 min ago

Once I got CI working on multiple platforms, the obvious next step was to be able to aggregate coverage reports across them. This should not be that hard, right? Well, I've spent a couple of hours on that over the last few days.

On Linux and OSX it was pretty much straightforward. Both GCC and Clang support coverage, so it's just a matter of configuring them properly and collecting the coverage reports. I've used my own solution for that in the past and it was really far from working well (somehow I never managed to get coverage fully uploaded to Codecov). Fortunately there is a CMake script called CMake-codecov which does all the needed work and works out of the box with GCC and Clang (even on OSX). Well, it works on Travis only once you update the compilers and install the llvm-cov tool.

The Windows part on AppVeyor was much harder for me. This is largely due to my lack of experience with Windows, and especially with development on Windows, over the past ten years (probably even more). The first challenge was to find something that can generate code coverage there.

After a lot of googling I settled on OpenCppCoverage, which seems to be the only free solution I was able to find. The good thing is that it can generate coverage in the Cobertura format that Codecov understands. There are also bad things that I've learned. First of all, it's quite hard to integrate with CTest. There is no support for wrapping test calls in custom commands, so I've misused the memory checks for that purpose. I've written a small Python script which pretends to be the valgrind interface and calls OpenCppCoverage in the background.
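
The wrapper itself only needs a few lines. A minimal sketch of the idea, assuming CTest passes its valgrind-style options before the actual test command and using illustrative OpenCppCoverage flags (check them against the version you have), might look like this:

#!/usr/bin/env python
# Sketch of a wrapper pretending to be valgrind for CTest memory checks,
# running OpenCppCoverage instead. Option handling is deliberately naive:
# anything starting with '-' is treated as a valgrind-style option.
import subprocess
import sys

args = sys.argv[1:]
cmd = [a for a in args if not a.startswith('-')]

# One Cobertura report per test case, named after the test executable.
report = 'coverage-' + cmd[0].replace('\\', '_').replace('/', '_') + '.xml'

ret = subprocess.call(['OpenCppCoverage',
                       '--export_type=cobertura:' + report,
                       '--'] + cmd)
sys.exit(ret)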

Now I had around 800 coverage files (one for each test case) and needed to deal with them somehow. The Codecov command line client doesn't deal with this out of the box, so the obvious choice was to merge them before upload. There even seems to be a script doing that, but unfortunately trying it on our coverage data got nowhere near completion within an hour, so that's not really a good choice. The second thing I tried was merging the binary coverage in OpenCppCoverage and then exporting to the Cobertura format. Obviously Gammu is a special project, as all I got from this attempt was a crashing OpenCppCoverage (it did merge some of the coverage, but it failed in the end without indicating any error).

In the end I settled on uploading files in chunks to Codecov. This seems to work quite okay, though it is a bit slow, mostly due to the way the Codecov bash uploader prepares data for upload (but this will hopefully be fixed soon).
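
For illustration, the chunking can be as dumb as the following sketch, assuming the bash uploader has been fetched locally as codecov.sh and accepts repeated -f options (the chunk size is arbitrary):

import glob
import subprocess

CHUNK = 50  # arbitrary number of reports per upload

files = sorted(glob.glob('coverage-*.xml'))
for i in range(0, len(files), CHUNK):
    args = ['bash', 'codecov.sh']
    for name in files[i:i + CHUNK]:
        args += ['-f', name]
    subprocess.check_call(args)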

Anyway the goal has been reached, both Windows and Linux code shows in coverage reports.

Filed under: Debian English Gammu

Dirk Eddelbuettel: RcppArmadillo 0.7.400.2.0

7 hours 18 min ago

Another Armadillo 7.* release -- now at 7.400. We skipped the 7.300.* series release as it came too soon after our most recent CRAN release. Releasing RcppArmadillo 0.7.400.2.0 now keeps us at the (roughly monthly) cadence which works as a good compromise between getting updates out at Conrad's sometimes frantic pace and keeping CRAN (and Debian) uploads to about once per month.

So we may continue the pattern of helping Conrad with thorough regression tests by building against all (by now 253 (!!)) CRAN dependencies, while keeping interim releases at the GitHub repo and only uploading to CRAN at most once a month.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab.

The new upstream release adds a few more helper functions. Detailed changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.400.2.0 (2016-08-24)
  • Upgraded to Armadillo release 7.400.2 (Feral Winter Deluxe)

    • added expmat_sym(), logmat_sympd(), sqrtmat_sympd()

    • added .replace()

Changes in RcppArmadillo version 0.7.300.1.0 (2016-07-30)
  • Upgraded to Armadillo release 7.300.1

    • added index_min() and index_max() standalone functions

    • expanded .subvec() to accept size() arguments

    • more robust handling of non-square matrices by lu()

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Matthew Garrett: Priorities in security

8 hours 19 min ago
I read this tweet a couple of weeks ago:

to me, an inclusive security community would focus as much (or at all) on surveillance of women by abusive partners as it does the state

— kelsey ᕕ( ᐛ )ᕗ (@_K_E_L_S_E_Y) August 2, 2016
and it got me thinking. Security research is often derided as unnecessary stunt hacking, proving insecurity in things that are sufficiently niche or in ways that involve sufficient effort that the realistic probability of any individual being targeted is near zero. Fixing these issues is basically defending you against nation states (who (a) probably don't care, and (b) will probably just find some other way) and, uh, security researchers (who (a) probably don't care, and (b) see (a)).

Unfortunately, this may be insufficient. As basically anyone who's spent any time anywhere near the security industry will testify, many security researchers are not the nicest people. Some of them will end up as abusive partners, and they'll have both the ability and desire to keep track of their partners and ex-partners. As designers and implementers, we owe it to these people to make software as secure as we can rather than assuming that a certain level of adversary is unstoppable. "Can a state-level actor break this" may be something we can legitimately write off. "Can a security expert continue reading their ex-partner's email" shouldn't be.


Alessio Treglia: The Breath of Time

25 August, 2016 - 23:33

 

For centuries man hunted, brought animals to pasture, cultivated fields and sailed the seas without any kind of tool to measure time. Back then, time was not measured, but only estimated with vague approximation, and its pace was enough to dictate the steps of the day and the life of man. Subsequently, for many centuries, hourglasses accompanied civilization with the slow flow of their sand grains. About hourglasses, Ernst Jünger writes in “Das Sanduhrbuch” (1954; no English translation): “This small mountain, formed by all the lost moments that fell on each other, could be understood as a comforting sign that time disappears but does not fade. It grows in depth”.

For the philosophers of ancient Greece, time was just a way to measure how things move in everyday life, and in any case there was a clear distinction between “quantitative” time (Kronos) and “qualitative” time (Kairòs). According to Parmenides, time is guise, because its existence…

Read More… [by Fabio Marzocca]

Joerg Jaspert: New gnupg-agent in Debian

25 August, 2016 - 16:55

In case you just upgraded to the latest gnupg-agent and use gnupg-agent as your ssh-agent, you may find that ssh refuses to work, with a simple but unhelpful

sign_and_send_pubkey: signing failed: agent refused operation

This seems to come from systemd starting the agent, rather than a script at the start of the X session, so it ends up with either no tty or an unusable one. A simple

gpg-connect-agent updatestartuptty /bye

updates that and voila, ssh agent functionality is back in.

Note: This assumes you have “enable-ssh-support” in your ~/.gnupg/gpg-agent.conf

Norbert Preining: Gaming: Deus Ex Go

25 August, 2016 - 16:34

During long flights and lazy afternoons relaxing from teaching, I tried out another game on my Android device: Deus Ex Go, a turn-based game in the style of the Deus Ex series (long years ago I was a beta tester for LGP's Deus Ex port). You have to pass through a series of levels, each one consisting of a hexagonal grid with an entry and exit point, and some nasty villains or machines trying to kill you.

Without any explanations given, you are thrown into the game, and it takes a few iterations until you understand what kind of attacks you are facing. But once you have figured that out, it is a more or less simple game of combination to work out how to get to the exit. I played through the roughly 50 levels of the story mode, and I think it was only in the last five that I once or twice had to actually stop and think hard to find a solution.

I found the game quite amusing at the beginning, but soon it became repetitive. Since you can play through the whole story mode in probably one long afternoon, that is not much of a problem. A bigger problem is the game's apparently incredible battery usage: playing without paying attention for some time soon leaves you with a nearly empty battery.

Graphically well done, with more or less interesting gameplay, it still does not stand up to Monument Valley.

Francois Marier: Debugging gnome-session problems on Ubuntu 14.04

25 August, 2016 - 12:00

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that the latter is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search libwayland-egl.so.1

and found that this file is provided by the following packages:

  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

I filed a bug for this on Launchpad.

Don Armstrong: H3ABioNet Hackathon (Workflows)

24 August, 2016 - 21:40

I'm in Pretoria, South Africa at the H3ABioNet hackathon which is developing workflows for Illumina chip genotyping, imputation, 16S rRNA sequencing, and population structure/association testing. Currently, I'm working with the imputation stream and we're using Nextflow to deploy an IMPUTE-based imputation workflow with Docker and NCSA's openstack-based cloud (Nebula) underneath.

The OpenStack command line clients (nova and cinder) seem to be pretty usable to automate bringing up a fleet of VMs and the cloud-init package which is present in the images makes configuring the images pretty simple.
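
As an illustration of how little scripting this needs, booting a small fleet with cloud-init user-data can be done with a loop around the nova client; the flavor, image, key and file names here are placeholders rather than our actual configuration:

import subprocess

N_WORKERS = 10  # placeholder fleet size

for i in range(N_WORKERS):
    subprocess.check_call([
        'nova', 'boot',
        '--flavor', 'm1.medium',           # placeholder flavor
        '--image', 'debian-jessie',        # placeholder image
        '--key-name', 'hackathon',         # placeholder keypair
        '--user-data', 'worker-init.yml',  # cloud-init config for the node
        'impute-worker-{0:02d}'.format(i),
    ])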

Now if I just knew of a better shared object store supported by Nextflow on OpenStack besides mounting an NFS share, things would be simpler.

You can follow our progress in our git repo: [https://github.com/h3abionet/chipimputation]

Zlatan Todorić: Take that boredom

24 August, 2016 - 12:45

While I was bored at Defcon, I took the smallest VPS in the DO offering (512MB RAM, 20GB disk), configured nginx on it, bought the domain zlatan.tech and cp'ed my blog data to blog.zlatan.tech. I thought it was just boredom and that I would tear it apart in a day or two, but it is still there.

Not only that: the droplet came with Debian 8.5, but I just added unstable and experimental to it and upgraded, just to experiment and see how long it would take me to break it. To make it even more adventurous (and also to force myself not to take it too seriously, at least at this point) I did something that would make Lars scream - I did not enable backups!

While having fun with it I added a Let's Encrypt certificate to it (wow, that was quite easy).

Then I installed and configured Tor, and ended up adding an .onion domain for it! It is: pvgbzphm622hv4bo.onion

My main blog is still going to be zgrimshell.github.io (for now at least), where I push my content generated with Nikola (a static site generator written in Python) as git commits. To my other two domains (on my server) I just rsync the content. Simple and efficient.

I must admit I like my blog layout. It is simple, easy to read, efficient and fast; I don't bother with comments, and writing a blog post in markdown (inside a terminal, as every well-behaved hacker citizen should) while compiling it with Nikola is a breeze (and yes, I did choose Nikola because of Nikola Tesla and Python). Also, I must admit that nginx is a pretty nice webserver; no need to explain the beauty of git, and I can't recommend rsync enough.

If anyone is interested in doing the same, I am happy to talk about it, but these tools are really simple (as I enjoy simple things, and by simple I mean small tools, no complicated configs and easy execution).

Reproducible builds folks: Reproducible Builds: week 69 in Stretch cycle

23 August, 2016 - 19:56

What happened in the Reproducible Builds effort between Sunday August 14 and Saturday August 20 2016:

Fasten your seatbelts

Important note: we have enabled build path variation for unstable now, so your package(s) might become unreproducible, even though they were previously said to be reproducible. Given a specific build path they probably still are reproducible, but read on for the details in the tests.reproducible-builds.org section below! As said many times: this is still research and we are working to make it reality.

Media coverage

Daniel Stender blogged about python packaging and explained some caveats regarding reproducible builds.

Toolchain developments

Thomas Schmitt uploaded xorriso which now obeys SOURCE_DATE_EPOCH. As stated in its man pages:

ENVIRONMENT
[...]
SOURCE_DATE_EPOCH  belongs to the specs of reproducible-builds.org.  It
is supposed to be either undefined or to contain a decimal number which
tells the seconds since january 1st 1970. If it contains a number, then
it is used as time value to set the  default  of  --modification-date=,
--gpt_disk_guid,  and  --set_all_file_dates.  Startup files and program
options can override the effect of SOURCE_DATE_EPOCH.
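
The consuming side of the spec is deliberately simple. Here is a minimal Python sketch of what a tool is expected to do (use the variable, when set, as the default timestamp instead of the current time):

import os
import time

def default_timestamp():
    # Per the spec: a decimal number of seconds since 1970-01-01 UTC,
    # or unset, in which case tools fall back to the current time.
    epoch = os.environ.get('SOURCE_DATE_EPOCH')
    if epoch is not None:
        return int(epoch)
    return int(time.time())

print(time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(default_timestamp())))
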
Packages reviewed and fixed, and bugs filed

The following packages have become reproducible after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • vulkan/1.0.21.0+dfsg1-1 by Timo Aaltonen.

The following 2 packages were not changed, but have become reproducible due to changes in their build-dependencies: tagsoup tclx8.4.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Bug tracker housekeeping:

  • Chris Lamb pinged 164 bugs he filed more than 90 days ago which have a patch and had no maintainer reaction.

Reviews of unreproducible packages

55 package reviews have been added, 161 have been updated and 136 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (16)
  • Santiago Vila (2)

diffoscope development

Chris Lamb, Holger Levsen and Mattia Rizzolo worked on diffoscope this week.

Improvements were made to SquashFS and JSON comparison, the https://try.diffoscope.org/ web service, documentation, packaging, and general code quality.

diffoscope 57, 58, and 59 were uploaded to unstable by Chris Lamb. Versions 57 and 58 were both broken, so Holger set up a job on jenkins.debian.net to test diffoscope on each git commit. He also wrote a CONTRIBUTING document to help prevent this from happening in future.

From these efforts, we were also able to learn that diffoscope is now reproducible even when built across multiple architectures:

< h01ger> | https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/diffoscope.html shows these packages were built on amd64:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb
< h01ger> | and on i386:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb
< h01ger> | and on armhf:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb

And those also match the binaries uploaded by Chris in his diffoscope 59 binary upload to ftp.debian.org, yay! Eating our own dogfood and enjoying it!

tests.reproducible-builds.org

Debian related:

  • show percentage of results in the last 24/48h (h01ger)
  • switch python database backend to SQLAlchemy (Valerie)
  • enable build path variation for unstable and experimental for all architectures (h01ger)

The last change will probably have an impact you will see: your package might become unreproducible in unstable, and this will be shown on tracker.debian.org, while it will still be reproducible in testing.

We've done this because we think reproducible builds are possible with arbitrary build paths. But we don't think those are a realistic goal for stretch, where we still recommend using ".buildinfo" to record the build path and then doing rebuilds using that path.

We are doing this because, besides doing theoretical groundwork, we also have a practical goal: enabling users to independently verify builds. And if they can only do this with a fixed path, so be it. For now.

To be clear: for Stretch we recommend that reproducible builds are done in the same build path as the "original" build.

Finally, and just for future reference, when we enabled build path variation on Saturday, August 20th 2016, the numbers for unstable were:

suite           all    reproducible   unreproducible ftbfs       depwait    not for this arch blacklisted
unstable/amd64  24693  21794 (88.2%)  1753 (7.1%)    972 (3.9%)  65 (0.2%)  95 (0.3%)         10 (0.0%)
unstable/i386   24693  21182 (85.7%)  2349 (9.5%)    972 (3.9%)  76 (0.3%)  103 (0.4%)        10 (0.0%)
unstable/armhf  24693  20889 (84.6%)  2050 (8.3%)    1126 (4.5%) 199 (0.8%) 296 (1.1%)        129 (0.5%)

Misc.

Ximin Luo updated our git setup scripts to make it easier for people to write proper descriptions for our repositories.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Sylvain Le Gall: Release of OASIS 0.4.7

23 August, 2016 - 06:51

I am happy to announce the release of OASIS v0.4.7.

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Drop support for OASISFormat 0.2 and 0.1.
  • New plugin "omake" to support build, doc and install actions.
  • Improve automatic tests (Travis CI and AppVeyor)
  • Trim down the dependencies (removed ocaml-gettext, camlp4, ocaml-data-notation)

Features:

  • findlib_directory (beta): to install libraries in sub-directories of findlib.
  • findlib_extra_files (beta): to install extra files with ocamlfind.
  • source_patterns (alpha): to provide module to source file mapping.

This version contains a lot of changes and is the achievement of a huge amount of work. The addition of OMake as a plugin is a huge step forward. The overall work has been targeted at making OASIS more library-like. This is still a work in progress, but we made some clear improvements by getting rid of various side effects (like the requirement to use "chdir" to handle the "-C" option, which led us to propagate ~ctxt everywhere and to design OASISFileSystem).

I would like to thank again the contributors to this release: Spiros Eliopoulos, Paul Snively, Jeremie Dimino, Christopher Zimmermann, Christophe Troestler, Max Mouratov, Jacques-Pascal Deplaix, Geoff Shannon, Simon Cruanes, Vladimir Brankov, Gabriel Radanne, Evgenii Lepikhin, Petter Urkedal, Gerd Stolpmann and Anton Bachin.

Luciano Prestes Cavalcanti: AppRecommender - Last GSoC Report

22 August, 2016 - 23:54
My work on Google Summer of Code was to create a new strategy for AppRecommender. This strategy should be able to take a referenced package, or a list of referenced packages, then analyze the packages that the user has already installed and make a recommendation using the referenced packages as a base. For example: if the user runs "$ sudo apt install vim", AppRecommender uses "vim" as the referenced package and should recommend packages with a relation between "vim" and the other packages that the user has installed. This work is done and has been added to the official AppRecommender repository.

During the GSoC program, more contributions were made to the AppRecommender project, helping the system to improve its recommendations, installation and configuration to help Debian packaging.

The following link contains my commits on AppRecommender: https://github.com/tassia/AppRecommender/commits/master?author=LucianoPC

During the period for students to get to know the community of the project, I talked with the Debian community about my project to get feedback and ideas. When talking to the Debian community on the IRC channels, we came up with the idea of using the popularity-contest data to improve the recommendations. I talked with my mentors, who approved the idea, and we then increased the project scope to use the popularity-contest data to improve the AppRecommender recommendations.

The popularity-contest data comes with several privacy policy concerns, so we did some research and published, on Planet Debian, a post that explains why we need the popularity-contest data to improve the recommendations and how we use this data. This post also contains an explanation of the risks and the measures taken to minimize them.

Two activities were then added. One of them is to create a script to be added to popularity-contest. This script gets the popularity-contest data, which is the users' packages, and generates clusters that group these packages by analyzing similar users. The other activity is to add collaborative data into AppRecommender, which will download the clusters data and use it to improve the recommendations.

The popularity-contest cluster script is done and was reviewed by my mentor, but it has not been integrated into popularity-contest yet. The usage of clusters data in AppRecommender has already been implemented, but it is still not added to the official repository because it is waiting for the cluster script's acceptance into popularity-contest. This work is not complete, but I will continue working with AppRecommender and the Debian community, and with my mentors' help, I will finish it.

The following link contains my commits on the repository with the popularity-contest cluster script feature, as well as other scripts that I used to support my work, but the only script that will be sent to popularity-contest is create_popcon_clusters.py: https://github.com/TCC-AppRecommender/scripts/commits/master?author=LucianoPC

The following link contains my commits on the repository with the AppRecommender collaborative data feature: https://github.com/tassia/AppRecommender/commits/collaborative_data?author=LucianoPC

Google Drive folder with the patch: https://drive.google.com/drive/folders/0BzmGBlxBo2G3Q3F5YjBpRl9yWUk?usp=sharing
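
To illustrate the clustering idea (this is not the actual create_popcon_clusters.py code, just a sketch of the approach): each popularity-contest submission can be treated as a binary vector of installed packages, and similar users grouped with k-means, for example with scikit-learn:

import numpy as np
from sklearn.cluster import KMeans

# Toy data: one row per user, one column per package (1 = installed).
packages = ['vim', 'git', 'gcc', 'gimp', 'inkscape']
users = np.array([
    [1, 1, 1, 0, 0],  # development-flavoured profiles
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],  # graphics-flavoured profiles
    [0, 0, 0, 1, 1],
])

labels = KMeans(n_clusters=2, n_init=10).fit_predict(users)
for row, label in zip(users, labels):
    print(label, [p for p, bit in zip(packages, row) if bit])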

Lars Wirzenius: Linux 25 jubilee symposium

22 August, 2016 - 22:03

I gave a talk about the early days of Linux at the jubilee symposium arranged by the University of Helsinki CS department. Below is an outline of what I meant to speak about, but the actual talk didn't follow it exactly. You can compare these to the video once it comes online.

  • Linus and I met at uni, the only 2 Swedish speaking new students that year, so we naturally migrated towards each other.
  • After a year away for military service, got back in touch, summer of 1990.
  • C & Unix course fall of 1990; Minix.
  • Linus didn't think atime updates in real time were plausible, but I showed him; funnily enough, atime updates have been an issue in Linux until fairly recently, since they slow things down (without being particularly useful)
  • Jan 5, 1991 bought his first PC (i386 + i387 + 4 MiB RAM and a small hard disk); he had a Sinclair QL before that.
  • Played Prince of Persia for a couple of months.
  • Then wanted to learn i386 assembly and multitasking.
  • A/B threading demo.
  • Terminal emulation, Usenet access from home.
  • Hard disk driver, mistaking hard disk for a modem.
  • More ambition, announced Linux to the world for the first time
  • first ever Linux installation.
  • Upload to ftp.funet.fi, directory name by Ari Lemmke.
  • Originally not free software, licence changed early 1992.
  • First mailing list was created and introduced me to a flood of email (managed with VAX/VMS MAIL and later mush on Unix).
  • I talked a lot with Linus about design at this time, but never really participated in the kernel work (partly because disagreeing with Linus is a high-stress thing).
  • However, I did write the first sprintf for the kernel, since Linus hadn't learnt about varargs functions in C; he then ruined it and added the comment "Wirzenius wrote this portably..." (add google hit count for wirzenius+fucked).
  • During 1992 Linux grew fast, and distros happened, and a lot of packaging and porting of software; porting was easier because Linus was happy to add/change things in the kernel to accommodate software
  • A lot of new users during 1992 as well.
  • End of 1992 I and a few others founded the Linux Documentation Project to help all the new users, some of whom didn't come from a Unix background.
  • In fact, things progressed so fast in 1992 that Linus thought he'd release 1.0 very soon, resulting in a silly sequence of version numbers: 0.12, 0.95, 0.96, 0.96b, 0.96c, 0.96c++2.
  • X server ported to Linux; almost immediate prediction of the year of the Linux desktop never happening unless ALL the graphics cards were supported immediately.
  • Linus was of the opinion that you needed one process (not thread) per window in X; I taught him event driven programming.
  • Bug in network code, resulting in ban on uni network.
  • Pranks in the shared office room.
  • We released 1.0 in an event at the CS dept in March, 1994; this included some talks and a ritual compilation of the release version during the event.

Satyam Zode: Google Summer of Code 2016 : Final Report

22 August, 2016 - 21:02
Project Title : Improving diffoscope tool and reproducibility of Debian packages

Project details

This project aims to improve the diffoscope tool and fix Debian packages which are unreproducible in the Reproducible Builds testing framework. diffoscope recursively unpacks archives of many kinds and transforms various binary formats into more human-readable forms in order to compare them. As a part of this project I worked on the argument completion feature and the feature for ignoring .buildinfo files. This project is a part of the Reproducible Builds effort.
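
For context, argument completion for an argparse-based tool like diffoscope generally has the following shape; the use of the argcomplete library and the option shown are illustrative, not necessarily diffoscope's exact code:

#!/usr/bin/env python
# PYTHON_ARGCOMPLETE_OK
import argparse
import argcomplete

parser = argparse.ArgumentParser(prog='diffoscope')
parser.add_argument('path1')
parser.add_argument('path2')
parser.add_argument('--text', metavar='OUTPUT_FILE',
                    help='write plain text output to OUTPUT_FILE')

argcomplete.autocomplete(parser)  # must run before parse_args()
args = parser.parse_args()

The shell side is then wired up with something like eval "$(register-python-argcomplete diffoscope)".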

Mentor and Co-Mentor
  • Jérémy Bobbio (Lunar) : Mentor
  • Reiner Herrmann (deki) : Co-Mentor
  • Holger Levsen (h01ger) : Co-Mentor
  • Mattia Rizzolo (mapreri) : Co-Mentor

Project Discussion

  • Introduction to Reproducible Builds in Debian

    The first time I came to know about Reproducible Builds was during DebConf 2015. I started to get involved at the start of March 2016. At the beginning Lunar suggested that I watch the talks listed on the Reproducible Builds wiki. I read the documentation on the Reproducible Builds site and started to participate in IRC discussions on #debian-reproducible.

  • Application Review Period

    During the proposal discussion period we discussed the areas where work needed to be done. I wrote the proposal and got it reviewed by the community on the mailing list. Simultaneously, I worked on bug #818111 and submitted a patch for it. That not only helped me understand the concept of Reproducible Builds but also helped me set up the testing environment required to check the reproducibility of Debian packages.

  • Community Bonding Period

    During the community bonding period I studied the codebase of diffoscope and also spent a good amount of time learning Python 3 metaprogramming and other OOP concepts. We also discussed more about hiding differences and the options for doing so. I couldn't finish my project research work during this period since I had exams in May 2016, which consumed almost half of the community bonding period and week 1 of the coding period.

Project Implementation

Challenges and Work Left
  • To understand the main purpose of diffoscope in the context of Reproducible Builds, I had to go through the complete Reproducible Builds project. It took a significant amount of time to understand what Reproducible Builds is and why it's important for Free software to build reproducibly. diffoscope is the last tool in the Reproducible Builds toolchain, and it was a big challenge for me to understand the whole process and the objective of diffoscope.
  • Work Left:

Future work
  • Based on the research work and implementation done during the coding period, make diffoscope better and enhance its ignoring capabilities.
  • Improve the parallel processing feature of diffoscope. This particular problem is hard to understand and implement.
  • Make diffoscope better by solving existing bugs.

Acknowledgement

I would like to express my deepest gratitude to Lunar for mentoring me throughout the Google Summer of Code program and for being cool. Lunar's deep knowledge of diffoscope and Python skills helped me a lot throughout the project, and we literally had great discussions. I would also like to thank the Debian community and Google for giving me this opportunity. Special thanks to the Reproducible Builds folks for all the guidance!

DebConf team: Proposing speakers for DebConf17 (Posted by DebConf17 team)

22 August, 2016 - 20:44

As you may already know, next DebConf will be held at Collège de Maisonneuve in Montreal from August 6 to August 12, 2017. We are already thinking about the conference schedule, and the content team is open to suggestions for invited speakers.

Priority will be given to speakers who are not regular DebConf attendees, as they are more likely to bring diverse viewpoints to the conference.

Please keep in mind that some speakers may have very busy schedules and need to be booked far in advance. So, we would like to start inviting speakers in the middle of September 2016.

If you would like to suggest a speaker to invite, please follow the procedure described on the Inviting Speakers page of the DebConf wiki.


DebConf17 team

Vincent Sanders: Down the rabbit hole

22 August, 2016 - 19:13
My descent began with a user reporting a bug and I fear I am still on my way down.

The bug was simple enough: a Windows bitmap file caused NetSurf to crash. Pretty quickly this was tracked down to the libnsbmp library attempting to decode the file. As to why we have a heavily used library for bitmaps: I am afraid they are part of every icon file, and many websites still have favicons in that format.

Some time with a hex editor and the file format specification soon showed that the image in question was malformed and had a bad offset header entry. So I was faced with two issues, firstly that the decoder crashed when presented with badly encoded data and secondly that it failed to deal with incorrect header data.

This is typical of bug reports from real users: the obvious issues have already been encountered by the developers and unit tests formed to prevent them; what remains is harder to produce. After a debugging session with Valgrind and Electric Fence I discovered the crash was actually caused by running off the front of an allocated block due to an incorrect bounds check. Fixing the bounds check was simple enough, as was working around the bad header value, and after adding a unit test for the issue I almost moved on.

Almost...

We already used the bitmap test suite of images to check the library decode, which was giving us a good 75% or so line coverage (I long ago added coverage testing to our CI system), but I wondered if there was a test set that might increase the coverage and perhaps exercise some more of the bounds-checking code. A bit of searching turned up the american fuzzy lop (AFL) project's synthetic corpora of bmp and ico images.

After checking with the AFL authors that the images were usable in our project, I added them to our test corpus and discovered a whole heap of trouble. After fixing more bounds checks and signedness issues I finally had a library I was pretty sure was solid, with over 85% test coverage.

Then I had the idea of actually running AFL on the library. I had been avoiding this because my previous experimentation with other fuzzing utilities had been utter frustration and a very poor return on investment of time. Following the quick start guide looked straightforward enough, so I thought I would spend a short amount of time and maybe I would learn a useful tool.

I downloaded the AFL source and built it with a simple make which was an encouraging start. The library was compiled in debug mode with AFL instrumentation simply by changing the compiler and linker environment variables.

$ LD=afl-gcc CC=afl-gcc AFL_HARDEN=1 make VARIANT=debug test
afl-cc 2.32b by <lcamtuf@google.com>
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: src/libnsbmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 751 locations (64-bit, hardened mode, ratio 100%).
AR: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/libnsbmp.a
COMPILE: test/decode_bmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 52 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: test/decode_ico.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 65 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_ico
afl-cc 2.32b by <lcamtuf@google.com>
Test bitmap decode
Tests:606 Pass:606 Error:0
Test icon decode
Tests:392 Pass:392 Error:0
TEST: Testing complete

I stuffed the AFL build directory on the end of my PATH, created a directory for the output and ran afl-fuzz:

afl-fuzz -i test/bmp -o findings_dir -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null

The result was immediate and not a little worrying: within seconds there were crashes, and lots of them! Over the next couple of hours I watched as the unique crash total climbed into the triple digits.

I was forced to abort the run at this point as, despite clear warnings in the AFL documentation about the demands of the tool, my laptop was clearly not cut out for this kind of work and had become distressingly hot.

AFL has a visualisation tool so you can see what kind of progress it is making. It produced a graph that showed just how fast it managed to produce crashes and how much the returns plateau after just a few cycles, although it was still finding a new unique crash every ten minutes or so when I aborted the run.

I dove in to analyse the crashes, and it immediately became obvious that the main issue was the test tool attempting allocations of absurdly large bitmaps. The browser itself uses a heuristic to determine the maximum image size based on used memory and several other values. I simply applied an upper bound of 48 megabytes per decoded image, which fits easily within the fuzzer's default heap limit of 50 megabytes.

The main source of "hangs" also came from large allocations, so once the test was fixed, afl-fuzz was re-run with a timeout parameter set to 100ms. This time, after several minutes, no crashes and only a single hang were found, which came as a great relief; at which point my laptop had a hard shutdown due to a thermal event!

Once the laptop cooled down, I spooled up a more appropriate system for this kind of work: a 24-way 2.1GHz Xeon system. A Debian Jessie guest VM with 20 processors and 20 gigabytes of memory was created, and the build was replicated and instrumented.

To fully utilise this system, the next test run would use AFL in parallel mode. In this mode there is a single "master" instance running all the deterministic checks and many "secondary" instances performing random tweaks.

If I have one tiny annoyance with AFL, it is that breeding and feeding a herd of rabbits by hand is annoying and something I would like to see a convenience utility for.
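
Something along these lines would do for this warren; it assumes AFL's usual parallel-mode convention of one -M master plus -S secondaries sharing a sync directory, with the paths matching the build above:

import subprocess

N = 19
TARGET = ('./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/'
          'test_decode_bmp')

procs = []
for i in range(N):
    # First instance is the deterministic master; the rest are secondaries.
    role = ['-M', 'fuzzer00'] if i == 0 else ['-S', 'fuzzer%02d' % i]
    procs.append(subprocess.Popen(
        ['afl-fuzz', '-i', 'test/bmp', '-o', 'sync_dir', '-t', '100']
        + role + ['--', TARGET, '@@', '/dev/null']))

for p in procs:
    p.wait()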

The warren was left overnight with 19 instances and by morning had generated crashes again. This time, though, the crashes actually appeared to be real failures.

$ afl-whatsup sync_dir/
Summary stats
=============

Fuzzers alive : 19
Total run time : 5 days, 12 hours
Total execs : 214 million
Cumulative speed : 8317 execs/sec
Pending paths : 0 faves, 542 total
Pending per fuzzer : 0 faves, 28 total (on average)
Crashes found : 554 locally unique

All the crashing test cases are available, and a simple file command immediately showed that the crashing test files had one thing in common: the height of the image was -2147483648. This seemingly odd number is actually meaningful to a programmer: it is the largest negative number which can be stored in a 32-bit integer (INT32_MIN). I immediately examined the source code that processes the height in the image header.

if ((width <= 0) || (height == 0))
        return BMP_DATA_ERROR;
if (height < 0) {
        bmp->reversed = true;
        height = -height;
}

The bug is where the height is made a positive number: this results in height becoming 0 after the existing check for zero has already passed, which causes a crash later in execution. A simple fix was applied and a test case added, removing the crash and any possible future failure due to this.
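
As an aside, confirming what the file command showed takes only a few lines: the height field is a signed 32-bit little-endian value at byte offset 22 of a standard BITMAPINFOHEADER, so a rough triage sketch over AFL's crash directories might be:

import glob
import struct

for name in sorted(glob.glob('sync_dir/*/crashes/id*')):
    with open(name, 'rb') as f:
        header = f.read(26)
    if len(header) < 26:
        continue  # too short to contain a full info header
    (height,) = struct.unpack_from('<i', header, 22)
    print(name, height)  # prints -2147483648 (INT32_MIN) for these cases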

Another AFL run has been started and after a few hours has yet to find a crash or a non-false-positive hang, so it looks like any remaining crashes are much harder to uncover.

Main lessons learned are:
  • AFL is an easy to use and immensely powerful and effective tool. State of the art has taken a massive step forward.
  • The test harness is part of the test! Make sure it does not behave poorly and cause issues itself.
  • Even a library with extensive test coverage and real world users can benefit from this technique. But it remains to be seen how quickly the rate of return will reduce after the initial fixes.
  • Use the right tool for the job! Ensure you heed the warnings in the manual, as AFL uses a lot of resources, including CPU, disc and memory.
I will of course be debugging any new crashes that occur, and perhaps turning my sights to the project's other unit-tested libraries. I will also be investigating generating our own custom test corpus from AFL to replace the demo set; this will hopefully increase our unit test coverage even further.

Overall this has been my first successful use of a fuzzing tool and a very positive experience. I would wholeheartedly recommend using AFL to find errors and perhaps even integrating it as part of a CI system.

Michal Čihař: Continuous integration on multiple platforms

22 August, 2016 - 17:00

Over the weekend I've played with continuous integration for Gammu to make it run on more platforms. I had to remember many things from the Windows world along the way, and the solution is not yet complete, but the basic build is working; the only problematic part is the external dependencies.

First of all we already have Linux builds on Travis CI. These cover compilation with both GCC and Clang compilers, hopefully covering most of the possible problems.

Recently I've added OS X builds on Travis CI, which was pretty much painless and worked out of the box.

The next major platform to support is Windows. Once I discovered AppVeyor I thought it might be the way to go. They have free plans for open-source projects (though with only one parallel build, compared to the four provided by Travis CI).

As our build system is cross-platform, based on CMake, it should work pretty much out of the box, right? Well, almost: tweaking the basics took some time (unfortunately there is no CMake support on AppVeyor, so you have to script it a bit).

The most painful things on the way:

  • finding the correct way to invoke the build and testsuite
  • our code was broken on Windows, making the testsuite fail
  • how to work with PowerShell (no, I'm not going to like it)
  • how to download and install an executable to PATH
  • test output integration with AppVeyor - done using an XSLT transformation and uploading the test results manually
  • the 32-bit / 64-bit mess: CMake happily finds 32-bit libs during a 64-bit build and vice versa, which makes the build fail later when linking - fixed by checking whether code can actually be built with the given library
  • 64-bit code crashes in the dummy driver, causing testsuite failures (this has to be something Windows specific, as the code works fine on 64-bit Linux) - this seems to be caused by too big allocations on the stack; moving them to the heap will fix it

You can check our current appveyor.yml in case you're going to try something similar. Current build results are on AppVeyor.

As a nice side effect, we now have up to date Windows binaries for Gammu.

Filed under: Debian English Gammu

NOKUBI Takatsugu: The 9th typhoon looks like Debian swirl logo

22 August, 2016 - 16:01

According to my follower’s tweet:

@kazken3 台風画像と水平反転したDebianマークが一致.. pic.twitter.com/ymBoRGz9ew

— kuromabo_(:3」∠)_ (@kuromabo) August 22, 2016

The typhoon image and a horizontally flipped Debian logo look the same.

Zlatan Todorić: When you wake up with a feeling

22 August, 2016 - 13:45

I woke up at 5am. Somehow I made myself go back to sleep again. Woke up at 6am. Such is the life of jet-lag. Or maybe I am just getting too old for it.

But the truth wouldn't be complete with only those assertions. I woke up inspired and tired at the same time. Tired because I am doing very time-consuming things. Also, at the same time, very emotional things. AND, at the exact same time, things that inspire me.

On paper, I am the technical leader of Purism. In reality, I have insanely good relations with my CEO for such a short time. So good that for months I was not only leading the technical shift but also took over operations (getting orders and delivering them while working with our assembly line to automate most of the tasks in this field). I was also playing first line of technical support (forums, IRC and email); actually, I was pretty much the only line of support for a few months. I was doing some website changes: changing some wording, updating a bunch of plugins and making sure it all works, resolving (hopefully) the Tor and Cloudflare issues, taming the annoying caching system for the forums, stopping forum spam and so on. I worked on better messaging for Purism public relations. I taught my team to use keys for signing and encryption. I interviewed (and read all mails from) people who were interested in working for or helping Purism. In the process of doing all that, I maybe wasn't the speediest person for all our users' needs, but I hope they understand and forgive me.

I was doing all that while researching and developing tablets (which ended up not being the most successful campaign, but we now do have them as a product). I was doing all that while seeing (and resolving) our kernel builds failing. Worked on pushing touchpad patches (not so good, but we are still working on them) upstream - and they did end up being upstreamed. While seeing repos being down because of our host. Repos being down because of broken sync with Debian. Repos being down because of our key mis-management. Metadata not working well. PureBrowser getting broken all the time. The Tor browser out of date. No real ISO updates. Wrong sources.list entries and so on.

And the hardest part was that I was doing all this with very limited scope and even more limited resources. So what kept me going, what is pushing me forward, and what am I doing?

One philosophy - Free software. Let me not explain it as technical debt. Let me explain it as a social movement. In an age where people are "bombed" by media, by all-time lying politicians (who use fear of non-existent threats/terror as a model to control the population), in an age where proprietary corporations are selling your freedom so you can gain temporary convenience, the term Free software is like Giordano Bruno in the age of the Inquisition. Free software does not only preserve your freedom regarding software source usage; it preserves your freedom to think, to think out of the box, and to not be punished for that. It preserves the freedom to live - to choose what to do and when, without having a negative impact on your life or other people's lives. The freedom to be transparent and to share. Because not only do ideas grow with sharing; we, as human beings, grow as we share. The freedom to say "NO".

NO. I somehow learnt, and personally think, that the freedom to say NO is the most important freedom in our lives. No, I will not obey some artificially created master that thinks they can plan and choose my life decisions. No, I will not negotiate my freedom for your convenience (such freedom is not real anyway, and it is a matter of time before you are blown away by such an illusion). No, I will not accept your credit, because it has STRINGS attached to it which you either don't present or bury in a mountain of superficial wording. No, I will not implant a chip inside me for the sake of your research or my convenience. No, I will not have a social account on the media where the majority of people are. No, I will not have a pacemaker which is a black box with proprietary (buggy) software that harvests my data without me being able to look at it.

Yin-Yang. Yes, I want to collaborate on making the world a better place for us all. I don't agree with most people, but that doesn't make them my enemies (although the media would like us to feel and think like that). I will try to preserve everyone's freedom as much as I can. Yes, I will share with my community and friends. Yes, I want to learn from those better than I am. Yes, I want to have awesome mentors. Yes, I will try to be an awesome mentor. Yes, I choose to care and not to ignore facts and actions, mine and other people's. Yes, I have the right to be imperfect and make mistakes, as long as I acknowledge them and work on them. Bugfixing ourselves as humans is the most important task in our lives. As in software, it is very time-consuming, but also as in software, it is an improvement and an incredible satisfaction to see a better version of yourself, getting more and more features (even if that sometimes means actually getting rid of other, bad features).

This all blends with my work at Purism. I spend a lot of time thinking about projects, development and the future. I must do that in order not to make grave mistakes. Failing hardware and software is not a grave mistake. Serious, but not grave. Grave is if we betray ourselves and our community in the pursuit of freedom. We are trying to unify many things - we want to give you security, privacy and FREEDOM with convenience. So I am pushing myself out of comfort zones and also out of conventional, and sometimes even my standard, ways of thinking. I have seen that the non-existent infrastructure for PureOS is hurting us a lot, but I needed to cope with it until the moment I could say: not anymore, we are starting to build our own infrastructure. I was coping with Cloudflare being assholes to Tor users, but now we are also shifting away from them. I came to a team where people didn't properly understand what we are building and why. Came to a very small and not that efficient team.

Now, we have employed a dedicated and hard-working person for operations (Goran) whom I trust. We have a dedicated support person (Mladen) who tries hard to work with people. A very creative visual mastermind (Francois). We have a capable Debian Developer (Matthias Klumpp) working on the new PureOS infra. We have capable and dedicated sysadmins (Theo and Stelio), which we didn't even have in the past. We are trying to LEVEL UP Free software and unify it in a convenient solution, an effort led by Joey Hess. We have a hard-working PureOS developer (Hema) who is coping with the current non-existent PureOS infra. We have a GNOME Board of Directors person (Jeff) who is trying to light up our image in the world (working with James to try to bring some light into the shadows caused by infinite supply chain delays). We have created an Advisory Board for Freedom, Privacy and Security, whose members I don't want to name now as we are preparing to announce it soon (and trust me, we have good people in here).

But the most important thing here is not that they are all capable or cool people. It is the core value in all of them - they care about freedom, and I trust them on their paths. Trust is always important, but in Purism it is essential for our work. I built the workflow without time management (everyone spends their time every single day as they see fit, as long as the work gets done). And we don't create insanely short deadlines just because everyone else thinks that's important (rarely is something more important than our time freedom). So the trust is built out of knowledge, and the knowledge I have about them and their work exists because we freely share, with no strings attached.

Because of them, and other good people from our community, I have the energy to sacrifice my entire time for Purism. It is not black and white: the CEO and I don't always agree, some members of my team don't always agree with me or I with them, and some people in the community are very rude and impolite and don't respect our work. But even with disagreement, everyone in Purism finds agreement in the end (we use facts in our judgments), and all the people who just try to disturb my team's work aren't as effective as all the lovely words from people who believe in us, who send us words of support and who share ideas and their thoughts with us. There is no greater satisfaction for me than reading a personal mail giving us kudos for the work, with an understanding of the underlying amount of work and issues.

While we are limited in resources, we have had occasional outcries from the community to let them help us. Now I want to help them to help me (you see the freedom of sharing here?). PureOS now has a wiki. It will be a community wiki which is endorsed by Purism as a company. Yes, you read that right: Purism considers its community part of the company (you don't need to get a paycheck to be a Purism member). That is why I call upon contributors (technical, but mostly non-technical too) to help us make the PureOS wiki the best resource on the net for our needs. Write tutorials for others, gather and put info on the wiki, create an ideas page and vote on the ideas so we can see what the community wants to see, chat with us so we all understand what, why and how we are working on things. Make it as transparent as possible. Everyone interested, please get in touch with our teams by either poking us online (IRC, social accounts) or via email (our personal ones, or [hr, pr, feedback]@puri.sm).

To finish this writing (as it is 8am here and I still want to rest a bit, because I will have meetings for 6 hours straight today), I wanted to share some personal insight into a few things from my point of view. I wanted to say that despite all the troubles, and the people who tried to make our time even harder (and it is already hard with all the limitations which naturally come with our kind of work today), we still create products, we still ship them, we still improve step by step, we still hire and we are still building. Keeping all that together and making progress is for me a milestone greater than just creating a technical product. I just hope we will continue and improve our pace so we can start progressing towards my personal great goal - to integrate and cooperate with most of the FLOSS ecosystem.

P.S. Yes, I also (finally!) became an official Debian Developer - I still didn't have time to sit down and properly think, and cry (as every good man does), about it.

Christian Perrier: [LIFE] Running activities - Ultra Trail du Mont-Blanc

22 August, 2016 - 12:47
Hello dear readers,

It's been ages since I last blogged. Being far less active in Debian than I've been in the past, I guess this is a logical consequence.

However, I'm still active as you may witness if you read the debian-boot mailing list : I still consider myself part of the D-I team and I'm maintaining a few sports-related packages.

Most of you know what has taken precedence over Debian development, namely trail and ultra-trail running. And, well, it hasn't decreased, far from it: I have already run about 10 races this year... 6 of them above 50km, and I ran my favourite 100km mountain race in early July for the second year in a row.

So, in the upcoming week, I'll be trying to reach what is usually considered the Grail of ultra-trail runners: the Ultra-Trail du Mont-Blanc race in Chamonix.

The race is fairly simple: run all around the Mont-Blanc summits, a 160km course with a bit less than 10,000 meters of positive climb. The race itself takes place between 800 and 2700 meters (so no "high mountain"), and I expect to complete it (if I succeed) in about 40 hours.

I'm very confident (maybe too much?) as I successfully completed a much more difficult race last year (only 144km, but over 11,000 meters of positive climb and a much more difficult path... it took me over 50 hours to complete it).

You can follow me on the live tracking site. The race starts on Friday August 26th, 18:00 CET DST.

If everything goes well, I have great projects for next year, including a 100-mile race in Colorado in August (we'll be traveling in the USA for over 3 weeks, peaking with the solar eclipse of August 21st in Kansas City).
