Planet Debian


Michal Čihař: Continuous integration on multiple platforms

22 August, 2016 - 17:00

Over the weekend I've played with continuous integration for Gammu to make it run on more platforms. I had to remember many things from the Windows world along the way, and the solution is not yet complete, but the basic build is working; the only problematic part is the external dependencies.

First of all we already have Linux builds on Travis CI. These cover compilation with both GCC and Clang compilers, hopefully covering most of the possible problems.

Recently I've added OS X builds on Travis CI, which was pretty much painless and worked out of the box.

The next major platform to support is Windows. Once I discovered AppVeyor, I thought it might be the way to go. They have a free plan for open-source projects (though it offers only one parallel build, compared to the four provided by Travis CI).

As our build system is cross-platform and based on CMake, it should work pretty much out of the box, right? Well, almost: tweaking the basics took some time (unfortunately there is no CMake support on AppVeyor, so you have to script it a bit).

The most painful things on the way:

  • finding the correct way to invoke the build and testsuite
  • our code was broken on Windows, making the testsuite fail
  • how to work with PowerShell (no, I'm not going to like it)
  • how to download and install an executable into PATH
  • test output integration with AppVeyor - done using an XSLT transformation and uploading the test results manually
  • the 32-bit / 64-bit mess: CMake happily finds 32-bit libs during the 64-bit build and vice versa, which makes the build fail later when linking - fixed by testing whether code can actually be built with the given library (see the sketch after this list)
  • 64-bit code crashes in the dummy driver, causing testsuite failures (this has to be something Windows-specific, as the code works fine on 64-bit Linux) - this seems to be caused by too-large allocations on the stack; moving them to the heap will fix it
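
A minimal sketch of such a link check in CMake (not Gammu's actual code; FOO_LIBRARY stands for whatever find_library() returned):

# Verify that the found library actually links in this build (32-bit vs 64-bit).
include(CheckCSourceCompiles)
set(CMAKE_REQUIRED_LIBRARIES ${FOO_LIBRARY})
check_c_source_compiles("int main(void) { return 0; }" FOO_LIBRARY_LINKS)
if(NOT FOO_LIBRARY_LINKS)
    unset(FOO_LIBRARY CACHE)    # discard the wrong-architecture library and keep looking
endif()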

You can check our current appveyor.yml in case you're going to try something similar. Current build results are on AppVeyor.

As a nice side effect, we now have up-to-date Windows binaries for Gammu.

Filed under: Debian English Gammu | 0 comments

NOKUBI Takatsugu: The 9th typhoon looks like Debian swirl logo

22 August, 2016 - 16:01

According to my follower’s tweet:

@kazken3 The typhoon image matches a horizontally flipped Debian logo.. pic.twitter.com/ymBoRGz9ew

— kuromabo_(:3」∠)_ (@kuromabo) 22 August 2016

The typhoon image and a horizontally flipped Debian logo look the same.

Zlatan Todorić: When you wake up with a feeling

22 August, 2016 - 13:45

I woke up at 5am. Somehow I made myself go back to sleep. I woke up again at 6am. Such is the life of jet lag. Or maybe I am just getting too old for it.

But the truth wouldn't be complete with only those assertions. I woke up inspired and tired at the same time. Tired because I am doing very time-consuming things. Also, at the same time, very emotional things. AND, at the exact same time, things that inspire me.

On paper, I am the technical leader of Purism. In reality, I have insanely good relations with my CEO for such a short time. So good that for months I was not only leading the technical shift, I also took over operations (getting orders and delivering them while working with our assembly line to automate most of the tasks in this field). I was also acting as the first line of technical support (forums, IRC and email). Actually, I was pretty much the only line of support for a few months. I was doing some website changes: changing some wording, updating a bunch of plugins and making sure it all works, resolving (hopefully) the Tor and Cloudflare issues for it and the annoying caching system for the forums, stopping forum spam and so on. I worked on better messaging for Purism public relations. I taught my team to use keys for signing and encryption. I interviewed (and read all the mails from) people who were interested in working for or helping Purism. In the process of doing all that, I maybe wasn't the speediest person for all our users' needs, but I hope they understand and forgive me.

I was doing all that while researching and developing tablets (which ended up not being the most successful campaign, but we now do have them as a product). I was doing all that while seeing (and resolving) our kernel builds failing. I worked on pushing touchpad patches upstream (not so good yet, but we are still working on it, and they did end up being upstreamed). While seeing repos go down because of our host. Repos down because of a broken sync with Debian. Repos down because of our key mismanagement. Metadata not working well. PureBrowser getting broken all the time. The Tor browser out of date. No real ISO updates. Wrong sources.list entries and so on.

And the hardest part of the work was that I was doing all this with very limited scope and even more limited resources. So what kept me going, what is pushing me forward, and what am I doing?

One philosophy - Free software. Let me not explain it as a technical matter. Let me explain it as a social movement. In an age where people are "bombed" by media, by all-time lying politicians (who use fear of non-existent threats/terror as a model to control the population), in an age where proprietary corporations are selling your freedom so you can gain temporary convenience, the term Free software is like Giordano Bruno in the age of the Inquisition. Free software does not only preserve your freedom over software source usage; it preserves your freedom to think, to think out of the box, and to not be punished for that. It preserves the freedom to live - to choose what and when to do, without having a negative impact on your life or other people's. The freedom to be transparent and to share. Because not only do ideas grow with sharing; we, as human beings, grow as we share. The freedom to say "NO".

NO. I somehow learnt, and personally think, that the freedom to say NO is the most important freedom in our lives. No, I will not obey some artificially created master who thinks they can plan and choose my life decisions. No, I will not negotiate my freedom for your convenience (such freedom is not real anyway, and it is only a matter of time before you are blown away by such an illusion). No, I will not accept your credit, because it has STRINGS attached to it which you either don't present or you blur in a mountain of superficial wording. No, I will not implant a chip inside me for the sake of your research or my convenience. No, I will not have a social media account just because the majority of people do. No, I will not have a pacemaker which is a black box with proprietary (buggy) software that harvests my data without me being able to look at it.

Yin-Yang. Yes, I want to collaborate on making the world a better place for us all. I don't agree with most people, but that doesn't make them my enemies (although the media would like us to feel and think like that). I will try to preserve everyone's freedom as much as I can. Yes, I will share with my community and friends. Yes, I want to learn from those better than I am. Yes, I want to have awesome mentors. Yes, I will try to be an awesome mentor. Yes, I choose to care and not to ignore facts and actions, mine and other people's. Yes, I have the right to be imperfect and to make mistakes, as long as I acknowledge and work on them. Bugfixing ourselves as humans is the most important task in our lives. As in software, it is very time-consuming, but also as in software, it is improvement, and it is an incredible satisfaction to see a better version of yourself, getting more and more features (even if that sometimes actually means getting rid of other/bad features).

This all blends with my work at Purism. I spend a lot of time thinking about projects, development and the future. I must do that in order not to make grave mistakes. Failing hardware and software is not a grave mistake. Serious, but not grave. Grave is if we betray ourselves and our community in the pursuit of Freedom. We are trying to unify many things - we want to give you security, privacy and FREEDOM with convenience. So I am pushing myself out of comfort zones and also out of conventional, and sometimes even my standard, ways of thinking. I have seen that the non-existing infrastructure for PureOS is hurting us a lot, but I needed to cope with it until the time when I could say: not anymore, we are starting to build our own infrastructure. I was coping with Cloudflare being assholes to Tor users, but now we are also shifting away from them. I came to a team where people didn't properly understand what we are building and why. I came to a very small and not that efficient team.

Now, we have employed a dedicated and hard-working person for operations (Goran) whom I trust. We have a dedicated support person (Mladen) who tries hard to work with people. A very creative visual mastermind (Francois). We have a capable Debian Developer (Matthias Klumpp) working on the new PureOS infrastructure. We have capable and dedicated sysadmins (Theo and Stelio), which we didn't even have in the past. We are trying to LEVEL UP Free software and unify it in a convenient solution, an effort led by Joey Hess. We have a hard-working PureOS developer (Hema) who is coping with the current non-existent PureOS infrastructure. We have a GNOME Board of Directors member (Jeff) who is trying to light up our image in the world (working with James to try to bring some light into our shadows caused by infinite supply chain delays). We have created an Advisory Board for Freedom, Privacy and Security, whose members I don't want to name now as we are preparing to announce it soon (and trust me, we have good people in here).

But the most important thing here is not that they are all capable or cool people. It is the core value in all of them - they care about Freedom, and I trust them on their paths. Trust is always important, but in Purism it is essential for our work. I built the workflow without time management (everyone spends their time every single day as they see fit, as long as the work gets done). And we don't create insanely short deadlines just because everyone else thinks they are important (rarely is something more important than our time freedom). So the trust is built on knowledge, and the knowledge I have about them and their work exists because we freely share, with no strings attached.

Because of them, and other good people from our community, I have the energy to sacrifice my entire time for Purism. It is not black and white: the CEO and I don't always agree, some members of my team don't always agree with me or I with them, and some people in the community are very rude, impolite and don't respect our work. But even with disagreement, everyone at Purism finds agreement in the end (we use facts in our judgments), and all the people who just try to disturb my and my team's work aren't as potent as all the lovely words of the people who believe in us, who send us words of support and who share ideas and their thoughts with us. There is no greater satisfaction for me than reading a personal mail giving us kudos for the work, showing an understanding of the underlying amount of work and issues.

While we are limited in resources, we have had occasional outcries from the community asking to help us. Now I want to help them to help me (you see the freedom of sharing here?). PureOS now has a wiki. It will be a community wiki which is endorsed by Purism as a company. Yes, you read that right: Purism considers its community part of the company (you don't need to get a paycheck to be a Purism member). That is why I call upon contributors (technical, but mostly non-technical too) to help us make the PureOS wiki the best resource on the net for our needs. Write tutorials for others, gather and put info on the wiki, create an ideas page and vote on the entries so we can see what the community wants to see, chat with us so we all understand what, why and how we are working on things. Make it as transparent as possible. Everyone interested, please get in touch with our teams by either poking us online (IRC, social accounts) or via email (our personal addresses, or [hr, pr, feedback]@puri.sm).

To finish this writing (as it is 8am here and I still want to rest a bit, because I will have meetings for 6 hours straight today) - I wanted to share some personal insight into a few things from my point of view. I wanted to say that despite all the troubles and the people who tried to make our time even harder (and it is already hard, with all the limitations which come naturally today with our kind of work), we still create products, we still ship them, we still improve step by step, we still hire, and we are still building. Keeping all that together and making progress is for me a milestone greater than just creating a technical product. I just hope we will continue and improve our pace so we can start progressing towards my personal great goal - to integrate and cooperate with most of the FLOSS ecosystem.

P.S. Yes, I also (finally!) became an official Debian Developer - I still haven't had time to sit down and properly think and cry (as every good man does) about it.

Christian Perrier: [LIFE] Running activities - Ultra Trail du Mont-Blanc

22 August, 2016 - 12:47
Hello dear readers,

It's been ages since I last blogged. Being far less active in Debian than I've been in the past, I guess this is a logical consequence.

However, I'm still active, as you may witness if you read the debian-boot mailing list: I still consider myself part of the D-I team, and I'm maintaining a few sports-related packages.

Most of you know what has taken precedence over Debian development, namely trail and ultra-trail running. And, well, it hasn't decreased, far from it: I have already run about 10 races this year, 6 of them above 50km, and in early July I ran my favourite 100km mountain race for the second year in a row.

So, in the upcoming week, I'll be trying to reach what is usually considered the Grail of ultra-trail runners: the Ultra-Trail du Mont-Blanc race in Chamonix.

The race is fairly simple: run all the way around the Mont-Blanc summits, a 160km race with a bit less than 10,000 meters of positive climb. The race itself takes place between 800 and 2700 meters (so no "high mountain"), and I expect to complete it (if I succeed) in about 40 hours.

I'm very confident (maybe too confident?), as I successfully completed a much more difficult race last year (only 144km, but over 11,000 meters of positive climb and a much more difficult path... it took me over 50 hours to complete it).

You can follow me on the live tracking site. The race starts on Friday, August 26th, at 18:00 CEST.

If everything goes well, I have great projects for next year, including a 100-mile race in Colorado in August (we'll be traveling in the USA for over 3 weeks, peaking with the solar eclipse of August 21st in Kansas City).

Paul Tagliamonte: go-wmata - golang bindings to the DC metro system

22 August, 2016 - 09:16

A few weeks ago, I hacked up go-wmata, some golang bindings to the WMATA API. This is super handy if you are in the DC area and want to interface with the WMATA data.

As a proof of concept, I wrote a Yo bot called @WMATA, which returns the closest station if you Yo it your location. For hilarity, feel free to Yo it from outside DC.

For added fun, and puns, I wrote a dbus proxy for the API as well, at wmata-dbus, so you can query the next train over dbus. One thought was to make a GNOME Shell extension to tell me when the next train is due. I’d love help with this (or pointers on how to learn how to do this right).

Cyril Brulebois: Freelance Debian consultant: running DEBAMAX

22 August, 2016 - 05:35
Executive summary

Since October 2015, I've been running a FLOSS consulting company specializing in Debian, called DEBAMAX.

Longer version

Everything started two years ago. Back then I blogged about one of the biggest changes in my life: trying to find the right balance between volunteer work as a Debian Developer, and entrepreneurship as a Freelance Debian consultant. Big change because it meant giving up the comfort of the salaried world, and figuring out whether working this way would be sufficient to earn a living…

I experimented for a while under a simplified status. It comes with a number of limitations, but it's a huge win compared to France’s heavy company-related administrivia. Here’s what it looked like, everything being done online:

  • 1 registration form to begin with: wait a few days, get an identifier from INSEE, mention it in your invoices, there you go!

  • 4 tax forms a year: taxes can be declared monthly or quarterly, I went for the latter.

A number of things became quite clear after a few months:

  • I love this new job! Sharing my Debian knowledge with customers, and using it to help them build/improve/stabilise their products and their internal services feels great!

  • Even if I wasn't aware of that initially, it seems like I've got a decent network already: Debian Developers, former coworkers, and friends thought about me for their Debian-related tasks. It was nice to hear about their needs, say yes, sign paperwork, and start working right away!

  • While I'm trying really hard not to get too optimistic (achieving a given turnover in the first year doesn't mean you're guaranteed to do so again the following year), it seemed to go well enough for me to consider switching from this simplified status to a full-blown company.

Thankfully, I was eligible for support from the local Chamber of Commerce and Industry (CCI Rennes), which provides training sessions for new entrepreneurs, coaching, and meeting opportunities (accountants, lawyers, insurance companies, …). Summer in France is traditionally rather quiet (read: almost everybody is on vacation), so DEBAMAX officially started operating in October 2015. Apart from various administrative and accounting duties, running this company doesn't change the way I've been working since July 2014, so everything is fine!

As before, I won't be writing much about it through my personal blog, except for an occasional update every other year; if you want to follow what's happening with DEBAMAX:

  • Website: debamax.com — in addition to the usual company, services, and references sections, it features a blog (with RSS) where some missions are going to be detailed (when it makes sense to share and when customers are fine with it). Spoiler alert: Tails is likely to be the first success story there. ;)
  • Twitter: @debamax — which is going to be retweeted for a while from my personal account, @CyrilBrulebois.

Gregor Herrmann: RC bugs 2016/30-33

22 August, 2016 - 04:56

not much to report but I got at least some RC bugs fixed in the last weeks. again mostly perl stuff:

  • #759979 – src:simba: "simba: FTBFS: RoPkg::Rsync ...failed! (needed)"
    keep ExtUtils::AutoInstall from downloading stuff, upload to DELAYED/7
  • #817549 – src:libropkg-perl: "libropkg-perl: Removal of debhelper compat 4"
    use debhelper compatibility level 5, upload to DELAYED/7
  • #832599 – iodine: "Fails to start after upgrade"
    update service file and use deb-systemd-helper in postinst
  • #832832 – src:perlbrew: "perlbrew: FTBFS: Tests failures"
    add patch to deal with removed old perl version (pkg-perl)
  • #832833 – src:libtest-valgrind-perl: "libtest-valgrind-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #832853 – src:libmojomojo-perl: "libmojomojo-perl: FTBFS: Tests failures"
    close, the underlying problem is fixed (pkg-perl)
  • #832866 – src:libclass-c3-xs-perl: "libclass-c3-xs-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #834210 – libdancer-plugin-database-core-perl: "libdancer-plugin-database-perl: FTBFS: Failed 1/5 test programs. 6/45 subtests failed."
    upload new upstream release (pkg-perl)
  • #834793 – libgit-wrapper-perl: "libgit-wrapper-perl: FTBFS: t/basic.t whitespace changes"
    add patch from upstream bug (pkg-perl)

David Moreno: WIP: Perl bindings for Facebook Messenger

22 August, 2016 - 00:18

A couple of weeks ago I started looking into wrapping the Facebook Messenger API in Perl. Since all the calls are extremely simple through its REST API, I thought it could be even easier and simpler to provide a small framework for hooking up bots using PSGI/Plack.

So I started putting some things together and with a very simple interface you could do a lot:

use strict;
use warnings;
use Facebook::Messenger::Bot;

my $bot = Facebook::Messenger::Bot->new({
    access_token   => '...',
    app_secret     => '...',
    verify_token   => '...'
});

$bot->register_hook_for('message', sub {
    my $bot = shift;
    my $message = shift;

    my $res = $bot->deliver({
        recipient => $message->sender,
        message => { text => "You said: " . $message->text() }
    });
    ...
});

$bot->spin();

You can hook a script like that as a .psgi file and plug it in to whatever you want.
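
For instance, assuming the script above is saved as bot.psgi and returns a PSGI application (exactly how the bot object exposes one is still up to this work in progress), Plack's standard runner can serve it:

plackup --port 5000 bot.psgi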

Once you have some more decent user flow and whatnot, you can build something like:



…using a simple script like this one.

The work is not finished and not yet CPAN-ready, but I’m posting this in case someone wants to join me in this mini-project or has suggestions; the work in progress is here.

Thanks!

David Moreno: Cosmetic changes to my posts archive

21 August, 2016 - 23:55

I’ve been doing a lot of cosmetic/layout changes to the nearly 750 posts in my blog’s archive. I apologize if this has broken some feed readers or aggregators. It appears like Hexo still needs better syndication support.

Dirk Eddelbuettel: RcppEigen 0.3.2.9.0

21 August, 2016 - 19:21

A new upstream release 3.2.9 of Eigen is now reflected in a new RcppEigen release 0.3.2.9.0, which got onto CRAN late yesterday and is now going into Debian. Once again, Yixuan Qiu did the heavy lifting of merging the upstream changes (plus the two local twists we need to keep around). Another change is by James "coatless" Balamuta, who added an exporter for row vectors.

The NEWS file entry follows.

Changes in RcppEigen version 0.3.2.9.0 (2016-08-20)
  • Updated to version 3.2.9 of Eigen (PR #37 by Yixuan closing #36 from Bob Carpenter of the Stan team)

  • An exporter for RowVectorX was added (thanks to PR #32 by James Balamuta)

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Daniel Stender: Collected notes from Python packaging

21 August, 2016 - 06:17

Here are some collected notes on particular problems encountered while packaging Python stuff for Debian; more like this will come in the future. Some of the issues discussed here might be rather simple and even benign for the experienced packager, but maybe this will be helpful for people coming across the same issues for the first time, wondering what's going wrong. Some of the things discussed aren't easy, though. Here are the notes for this posting, in no particular order.

UnicodeDecodeError on open() in Python 3 running in non-UTF-8 environments

I came across this problem again recently while packaging httpbin 0.5.0. The build breaks in the following way, e.g. when trying to build with sbuild in a chroot; this is the first execution of setup.py with the default Python 3 interpreter:

I: pybuild base:184: python3.5 setup.py clean 
Traceback (most recent call last):
  File "setup.py", line 5, in <module>
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2386: ordinal not in range(128)
E: pybuild pybuild:274: clean: plugin distutils failed with: exit code=1: python3.5 setup.py clean 

One comes across UnicodeDecodeErrors quite often on different occasions, mostly related to Python 2 packaging (like here). In this case, setup.py tries to read the long_description from the opened UTF-8 encoded file README.rst:

long_description = open(
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()

This is a problem for Python 3.5 (and other Python 3 versions) when setup.py is executed by an interpreter run in a non-UTF-8 environment [1]:

$ LANG=C.UTF-8 python3.5 setup.py clean
running clean
$ LANG=C python3.5 setup.py clean
Traceback (most recent call last):
  File "setup.py", line 5, in <module>
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2386: ordinal not in range(128)
$ LANG=C python2.7 setup.py clean
running clean

The reason for this error is that the default encoding of the file object returned by open() in Python 3.5 (for example) is platform dependent, so that read() fails when there's a mismatch:

>>> readme = open('README.rst')
>>> readme
<_io.TextIOWrapper name='README.rst' mode='r' encoding='ANSI_X3.4-1968'>

Non-UTF-8 build environments, where $LANG isn't set at all or is set to C, are common or even the default in Debian packaging; for example, the primary environment of the continuous integration and test building for reproducible builds features exactly that (see here). The same holds for the base system of the sbuild environment:

$ schroot -d / -c unstable-amd64-sbuild -u root
(unstable-amd64-sbuild)root@varuna:/# locale
LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=

A problem like this is solved most elegantly by installing a workaround in debian/rules. A quick and easy fix is to add export LC_ALL=C.UTF-8 there, which supersedes the locale settings of the build environment. $LC_ALL is commonly used to change existing locale settings; it overrides all other locale variables with the same value (see here). C.UTF-8 is a UTF-8 locale which is available by default in a base system; it can be used without installing the locales package (or worse, the huge locales-all package):

(unstable-amd64-sbuild)root@varuna:/# locale -a
C
C.UTF-8
POSIX
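
A minimal debian/rules sketch with this workaround (assuming a dh-style package built with pybuild):

#!/usr/bin/make -f
# Supersede the build environment's locale so Python 3 defaults to UTF-8.
export LC_ALL = C.UTF-8

%:
	dh $@ --with python3 --buildsystem=pybuild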

Ideally, this problem should be taken care of upstream. In Python 3, the built-in open() is io.open(), to which a specific encoding can be given, so that the UnicodeDecodeError vanishes. In Python 2, the built-in open() doesn't support an encoding argument, but io.open() is available there, too. A fix which works for both Python branches goes like this:

import io
long_description = io.open(
    os.path.join(os.path.dirname(__file__), 'README.rst'), encoding='utf-8').read()
non-deterministic order of requirements in egg-info/requires.txt

This problem appeared in prospector/0.11.7-5 in the reproducible builds test builds; that was the first package revision of Prospector running on Python 3 [2]. It was revealed that there were differences in the sorting order of the [with_everything] dependencies, i.e. the requirements in prospector-0.11.7.egg-info/requires.txt, when the package was built on varying systems:

$ debbindiff b1/prospector_0.11.7-6_amd64.changes b2/prospector_0.11.7-6_amd64.changes 
{...}
├── prospector_0.11.7-6_all.deb
│   ├── file list
│   │ @@ -1,3 +1,3 @@
│   │  -rw-r--r--   0        0        0        4 2016-04-01 20:01:56.000000 debian-binary
│   │ --rw-r--r--   0        0        0     4343 2016-04-01 20:01:56.000000 control.tar.gz
│   │ +-rw-r--r--   0        0        0     4344 2016-04-01 20:01:56.000000 control.tar.gz
│   │  -rw-r--r--   0        0        0    74512 2016-04-01 20:01:56.000000 data.tar.xz
│   ├── control.tar.gz
│   │   ├── control.tar
│   │   │   ├── ./md5sums
│   │   │   │   ├── md5sums
│   │   │   │   │┄ Files in package differ
│   ├── data.tar.xz
│   │   ├── data.tar
│   │   │   ├── ./usr/share/prospector/prospector-0.11.7.egg-info/requires.txt
│   │   │   │ @@ -1,12 +1,12 @@
│   │   │   │  
│   │   │   │  [with_everything]
│   │   │   │ +pyroma>=1.6,<2.0
│   │   │   │  frosted>=1.4.1
│   │   │   │  vulture>=0.6
│   │   │   │ -pyroma>=1.6,<2.0

The reproducible builds folks recognized this as a toolchain problem and set up the issue randomness_in_python_setuptools_requires.txt to cover it. Plus, a wishlist bug against python-setuptools was filed to fix this. The patch provided by Chris Lamb adds sorting of the dependencies in requires.txt for Setuptools by applying sorted() (stdlib) in _write_requirements() in command/egg_info.py:

--- a/setuptools/command/egg_info.py
+++ b/setuptools/command/egg_info.py
@@ -406,7 +406,7 @@ def warn_depends_obsolete(cmd, basename, filename):
 def _write_requirements(stream, reqs):
     lines = yield_lines(reqs or ())
     append_cr = lambda line: line + '\n'
-    lines = map(append_cr, lines)
+    lines = map(append_cr, sorted(lines))
     stream.writelines(lines)

OK, so no instance in the toolchain sorts these requirements properly if differences appear. But what is the reason for these differences in the Prospector packages in the first place? The problem is somewhat subtle. In setup.py, [with_everything] is a dictionary entry of _OPTIONAL (which is used for extras_require) that is created by a list comprehension out of the other values in that dictionary. The code goes like this:

_OPTIONAL = {
    'with_frosted': ('frosted>=1.4.1',),
    'with_vulture': ('vulture>=0.6',),
    'with_pyroma': ('pyroma>=1.6,<2.0',),
    'with_pep257': (),  # note: this is no longer optional, so this option will be removed in a future release
}
_OPTIONAL['with_everything'] = [req for req_list in _OPTIONAL.values() for req in req_list]

The resulting _OPTIONAL dictionary, including the new key with_everything (which, without further sorting, is the source of the list of requirements in requires.txt), looks e.g. like this (code snippet run through IPython):

In [3]: _OPTIONAL
Out[3]: 
{'with_everything': ['vulture>=0.6', 'pyroma>=1.6,<2.0', 'frosted>=1.4.1'],
 'with_vulture': ('vulture>=0.6',),
 'with_pyroma': ('pyroma>=1.6,<2.0',),
 'with_frosted': ('frosted>=1.4.1',),
 'with_pep257': ()}

That list comprehension iterates over the other dictionary entries to gather the value of with_everything, and – Python programmers know this, of course – dictionaries are unordered, so the order of the key-value pairs isn't fixed but is determined by certain conditions outside the interpreter. That's the source of the non-determinism in this Debian package revision of Prospector.

This problem has been fixed by a patch, which just presorts the list of requirements before it gets added to _OPTIONAL:

@@ -76,8 +76,8 @@
     'with_pyroma': ('pyroma>=1.6,<2.0',),
     'with_pep257': (),  # note: this is no longer optional, so this option will be removed in a future release
 }
-_OPTIONAL['with_everything'] = [req for req_list in _OPTIONAL.values() for req in req_list]
-
+with_everything = [req for req_list in _OPTIONAL.values() for req in req_list]
+_OPTIONAL['with_everything'] = sorted(with_everything)

In comparison to the list method sort(), the function sorted() does not change the iterable in place but returns a new list; either could have been used. As a side note, egg-info/requires.txt isn't even needed, but that's another issue.
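
A quick illustration of the difference in the interpreter:

>>> reqs = ['vulture>=0.6', 'pyroma>=1.6,<2.0', 'frosted>=1.4.1']
>>> sorted(reqs)   # returns a new, sorted list; reqs itself is unchanged
['frosted>=1.4.1', 'pyroma>=1.6,<2.0', 'vulture>=0.6']
>>> reqs.sort()    # sorts reqs in place and returns None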

  1. As an alternative to prefixing LC_ALL, env -i could be used to get an empty environment. 

  2. Actually, 0.11.7-4 was already, but that package revision was in experimental due to the switch to Python 3 and was therefore not tested by the reproducible builds continuous integration. 

Francois Marier: Replacing a failed RAID drive

21 August, 2016 - 05:00

Translation of the original English article at https://feeding.cloud.geek.nz/posts/replacing-a-failed-raid-drive/.

Here is the process I followed to replace a failed RAID drive on a Debian machine.

Replacing the drive

After noticing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

With this information in hand, I shut down the computer, removed the faulty drive and installed a new blank one in its place.

Initializing the new drive

After booting with the new blank drive, I copied the partition table over using parted.

First, I examined the partition table on the healthy drive:

$ parted /dev/sda
unit s
print

and created a new partition table on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

Then I used the mkpart command for each of my 4 partitions, giving them all the same size as the corresponding partitions on /dev/sda.

Finally, I used the toggle 1 bios_grub command (boot partition) and toggle X raid (where X is the partition number) for all of the RAID partitions, before checking with the print command that the two partition tables were now identical.

Resyncing/recreating the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following commands on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the sync status with:

watch -n 2 cat /proc/mdstat

To speed up the process, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (it is not possible to restore a RAID0 partition, it must be recreated from scratch), I had to do two things (see the sketch after this list):

  • replace the swap UUID in /etc/fstab with the UUID printed by the mkswap command (or by running blkid and taking the UUID of /dev/md1)
  • replace the UUID of /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan
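
For illustration, the updated swap entry in /etc/fstab would look something like this (the UUID below is made up; use the one printed by mkswap or blkid):

# swap on the rebuilt /dev/md1 (illustrative, made-up UUID)
UUID=2f1c57e4-8e3d-4f6a-9b0e-0c6f4a1d2b3c none swap sw 0 0
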
Making sure the machine can boot off the replacement drive

To make sure the machine can boot with either of the two drives, I reinstalled the grub boot loader onto the new drive:

grub-install /dev/sdb

before rebooting with both drives connected, which confirmed that my configuration works properly.

Then I booted without the /dev/sda drive to make sure everything would still work if that drive were to die and leave me with only the new one (/dev/sdb).

This test obviously breaks the sync between the two drives, so I had to reboot with both drives connected and then re-add /dev/sda to all of the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once all of this was done, I rebooted once more with both drives to confirm that everything works fine:

cat /proc/mdstat

and then ran a full SMART test on the new drive:

smartctl -t long /dev/sdb


Jose M. Calhariz: Availability of at in the Major Linux Distributions

21 August, 2016 - 01:11

In this blog post I will cover which versions of the at software are used by the leading Linux distributions, as reported by LWN.


Currently, some distributions are lagging behind in adopting the latest at software.

Russell Coker: Basics of Backups

20 August, 2016 - 13:04

I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

Essential Requirements

Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.

Backups must be in multiple places. One way of losing data is if your house burns down, if that happens all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).

It must not be possible for one event to wipe out all backups. Some people use “cloud” backups, there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage, for example Google Drive appears to have 15G of free storage which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe it and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on (also see the paragraph about having at least 2 backups).

Backup Devices

USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size means that you can buy more of them. It would be quite easy to buy 10 USB sticks for multiple copies of data.

Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.

The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.

When choosing backup devices, consider the environment that they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then an SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

Multiple Backups

It’s not uncommon for data corruption or mistakes to be discovered some time after it happens. Also in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.

To address these problems you should have older backups stored. It’s not uncommon in a corporate environment to have backups every day stored for a week, backups every week stored for a month, and monthly backups stored for some years.

For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.

Offsite Backups

One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too, if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places or make sure that it’s not the secret data!

Banks have a variety of ways of storing items. Bank safe deposit boxes can be used for anything that fits, and hard drives do fit. If you have a mortgage your bank might give you free storage of “papers” as part of the service (the Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.

If you put a backup on USB or SD storage in your wallet then that can also be a good offsite backup. For most people, losing data from disk is more common than losing their wallet.

A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so an attacker who compromises your phone can’t steal it. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.

A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.

Testing

On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.
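
A minimal way of doing such a comparison on a GNU/Linux system (the paths here are just examples):

diff -r ~/Photos /media/backup/Photos    # reports any files that differ or are missing on either side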

Secret Data

Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.

One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.

Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.

One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore, that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

Backup Corruption

With the amounts of data in use nowadays, the probability of data corruption is increasing. If you use any compression program on the data that is backed up (even data that can’t be compressed further, such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupted, you will easily be able to determine which one has the correct data.
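
For example, with the common zip tools (file names are just examples):

zip -r photos.zip Photos/    # every file stored in the archive gets a CRC-32 checksum
unzip -t photos.zip          # test the archive; prints "No errors detected" if it is intact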

Any Suggestions?

If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.

Related posts:

  1. No Backups WTF Some years ago I was working on a project that...
  2. Basics of EC2 I have previously written about my work packaging the tools...
  3. document storage I have been asked for advice about long-term storage of...

Joey Hess: keysafe alpha release

20 August, 2016 - 06:48

Keysafe securely backs up a gpg secret key or other short secret to the cloud. But not yet. Today's alpha release only supports storing the data locally, and I still need to finish tuning the argon2 hash difficulties with modern hardware. Other than that, I'm fairly happy with how it's turned out.

Keysafe is written in Haskell, and many of the data types in it keep track of the estimated CPU time needed to create, decrypt, and brute-force them. Running that through an AWS spot-pricing cost model lets keysafe estimate how much an attacker would need to spend to crack your password.
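
As a rough illustration of that kind of estimate (a sketch only; every constant below is a made-up assumption, not keysafe's actual cost model):

# Back-of-the-envelope attack-cost estimate; all numbers are assumptions.
guesses = 2 ** 40                  # assumed passphrase search space
seconds_per_guess = 1.0            # assumed argon2 hashing time per guess on one core
spot_price_per_core_hour = 0.01    # assumed AWS spot price in USD

cpu_hours = guesses * seconds_per_guess / 3600
print("estimated attack cost: $%.2e" % (cpu_hours * spot_price_per_core_hour))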


(Above is for the password "makesad spindle stick")

If you'd like to be an early adopter, install it like this:

sudo apt-get install haskell-stack libreadline-dev libargon2-0-dev zenity
stack install keysafe

Run ~/.local/bin/keysafe --backup --store-local to back up a gpg key to ~/.keysafe/objects/local/

I still need to tune the argon2 hash difficulty, and I need benchmark data to do so. If you have a top-of-the-line laptop or a server-class machine that's less than a year old, send me a benchmark:

~/.local/bin/keysafe --benchmark | mail keysafe@joeyh.name -s benchmark

Bonus announcement: http://hackage.haskell.org/package/zxcvbn-c/ is my quick Haskell interface to the C version of the zxcvbn password strength estimation library.

PS: Past 50% of my goal on Patreon!

Dirk Eddelbuettel: RQuantLib 0.4.3: Lots of new Fixed Income functions

20 August, 2016 - 04:18

A release of RQuantLib is now on CRAN and in Debian. It contains a lot of new code contributed by Terry Leitch over a number of pull requests. See below for full details, but the changes focus on Fixed Income and Fixed Income Derivatives, covering swaps, discount curves, swaptions and more.

In the blog post for the previous release 0.4.2, we noted that a volunteer was needed for a new Windows library build of QuantLib to replace the outdated version 1.6 used there. Josh Ulrich stepped up and built it. Josh and I tried for several months to get win-builder to install these, but sadly other things took priority and we were unsuccessful. So this release will not have Windows binaries on CRAN, as QuantLib 1.8 is not available there. Instead, you can use the ghrr drat and do

if (!require("drat")) install.packages("drat")
drat::addRepo("ghrr")
install.packages("RQuantLib")

to fetch prebuilt Windows binaries from the ghrr drat. Everybody else gets sources from CRAN.

The full changes are detailed below.

Changes in RQuantLib version 0.4.3 (2016-08-19)
  • Changes in RQuantLib code:

    • Discount curve creation has been made more general by allowing additional arguments for day counter and fixed and floating frequency (contributed by Terry Leitch in #31, plus some work by Dirk in #32).

    • Swap leg parameters are now in a combined variable and allow a textual description (Terry Leitch in #34 and #35)

    • BermudanSwaption has been modified to take option expirations and swap tenors in order to enable more general swaption structure pricing; a more general search for the swaptions was developed to accommodate this. Also, a DiscountCurve is allowed as an alternative to market quotes to reduce computation time for a portfolio on a given valuation date (Terry Leitch in #42 closing issue #41).

    • A new AffineSwaption model was added with a similar interface to BermudanSwaption but allowing for the valuation of a European exercise swaption utilizing the same affine methods available in BermudanSwaption. AffineSwaption will also value a Bermudan swaption, but it does not take rate market quotes to build a term structure, and a DiscountCurve object is required (Terry Leitch in #43).

    • Swap tenors can now be defined up to 100 years (Terry Leitch in #48 fixing issue #46).

    • Additional (shorter term) swap tenors are now defined (Guillaume Horel in #49, #54, #55).

    • New SABR swaption pricer (Terry Leitch in #60 and #64, small follow-up by Dirk in #65).

    • Use of Travis CI has been updated, switching to a maintained fork of the deprecated mainline.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc. should go to the rquantlib-devel mailing list off the R-Forge page. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Simon Désaulniers: [GSOC] Final report

19 August, 2016 - 23:04



The Google Summer of Code is now over. It has been a great experience, and I’m very glad I was able to take part. I’ve had the pleasure of contributing to a project showing great promise for the future of communication: Ring. The words “privacy” and “freedom”, in terms of technology, are more and more present in people's minds. All sorts of projects aiming at these goals are coming to life every day, like decentralized web networks (e.g. ZeroNet), blockchain-based applications, etc.

Debian

I’ve had the great opportunity to go to the Debian Conference 2016. I’ve been introduced to the debian community and debian developpers (“dd” in short :p). I was lucky to meet with great people like the president of the FSF, John Sulivan. You can have a look at my Debian conference report here.

If you want to read my Debian reports, you can do so by browsing the “Google Summer Of Code” category on this blog.

What I have done

Ring has been in the official Debian repositories since June 30th. This is good news for the GNU/Linux community. I’m proud to say that I was able to contribute to Debian by working on OpenDHT and developing new functionality to reduce network traffic. The goal behind this was to finally optimize the traffic consumed by data persistence on the DHT.

Github repository: https://github.com/savoirfairelinux/opendht

Queries

Issues:

  • #43: DHT queries

Pull requests:

  • #79: [DHT] Queries: remote values filtering
  • #93: dht: return consistent query from local storage
  • #106: [dht] rework get timings after queries in master
Value pagination

Issues:

  • #71: [DHT] value pagination

Pull requests:

  • #110: dht: Value pagination using queries
  • #113: dht: value pagination fix
Indexation (feat. Nicolas Reynaud)

Pull requests:

  • #77: pht: fix invalid comparison, inexact match lookup
  • #78: [PHT] Key consistency
General maintenance of OpenDHT

Issues:

  • #72: Packaging issue for Python bindings with CMake: $DESTDIR not honored
  • #75: Different libraries built with Autotools and CMake
  • #87: OpenDHT does not build on armel
  • #92: [DhtScanner] doesn’t compile on LLVM 7.0.2
  • #99: 0.6.2 filenames in 0.6.3

Pull requests:

  • #73: dht: consider IPv4 or IPv6 disconnected on operation done
  • #74: [packaging] support python installation with make DESTDIR=$DIR
  • #84: [dhtnode] user experience
  • #94: dht: make main store a vector>
  • #94: autotools: versionning consistent with CMake
  • #103: dht: fix sendListen loop bug
  • #106: dht: more accurate name for requested nodes count
  • #108: dht: unify bootstrapSearch and refill method using node cache
View by commits

You can have a look at my work by commits just by clicking this link: https://github.com/savoirfairelinux/opendht/commits/master?author=sim590

What’s left to be done Data persistence

The only thing left before my work is complete is to rigorously test the data persistence behavior to demonstrate the reduction in network traffic. To do so, we use our benchmark Python module. We are able to analyse traffic and produce plots like this one:


Plot: 32 nodes, 1600 values with normal condition test.

This particular plot was drawn before the enhancements. We are confident that the results will improve with the work produced during the GSoC.

TCP

Midway through the GSoC, we realized that moving from UDP to TCP would demand too much effort in too short a lapse of time. Also, it is not yet clear whether we should really do that.

Olivier Grégoire: Conclusion Google Summer of Code 2016

19 August, 2016 - 19:57

SmartInfo project with Debian

1. Me

Before getting into the thick of my project, let me present myself:
I am Olivier Grégoire (Gasuleg), and I study IT engineering at École de Technologie supérieure in Montreal.
I am a technician in electronics, and I began object-oriented programming just last year.
I applied to GSoC because I loved the concept of the project that I would work on and I really wanted to be part of it. I also wanted to discover the world of free software.

2. My Project

During this GSoC, I worked on the Ring project.

“Ring is a free software for communication that allows its users to make audio or video calls, in pairs or groups, and to send messages, safely and freely, in confidence.

Savoir-faire Linux and a community of contributors worldwide develop Ring. It is available on GNU/Linux, Windows, Mac OSX and Android. It can be associated with a conventional phone service or integrated with any connected object.

Under this very easy to use software, there is a combination of technologies and innovations opening all kinds of perspectives to its users and developers.

Ring is a free software whose code is open. Therefore, it is not the software that controls you.

With Ring, you take control of your communication!

Ring is an open source software under the GPL v3 license. Everyone can verify the code and propose new code to improve the software’s performance. It is a guarantee of transparency and freedom for everyone!”
Source: ring.cx

The problem concerns the typical user of Ring, the one who doesn't launch Ring from the terminal. They have no information about what has happened in the system. My goal was to create a tool that displays statistics about Ring.

3. Quick Explanation of What My Program Can Do

The Code

Here are the links to the code I was working on throughout the Google Summer of Code (you can see what I have done after the GSoC by looking at the newest patches):

Patch                    Status
Daemon                   On review
Lib Ring Client (LRC)    On review
Gnome client             On review
Remove unused code       Merged


What Can Be Displayed?
This is the final list of information I can display and some ideas on what information we could display in the future:

Information       Details                                  Done?
Call ID           The identification number of the call    Yes
Resolution        Local and remote                         Yes
Framerate         Local and remote                         Yes
Codec             Audio and video, local and remote        Yes
Bandwidth         Download and upload                      No
Performance use   CPU, GPU, RAM                            No
Security level    In SIP calls                             No
Connection time                                            No
Packets lost                                               No


To launch it, right-click on the call and select “Show advanced information”.

To stop it, same thing: right-click on the call and select “Hide advanced information”.

4. More Details About My Project

My program needs to retrieve information from the daemon (LibRing) and then display it in the Gnome client. So I needed to create patches for the daemon, the D-Bus layer (in the daemon patch), LibRingClient, and the GNU/Linux (Gnome) client.

This is what the architecture of the project looks like.

source: ring.cx

And this is how I implemented my project.

5. Future of the Project

  • Add background on the gnome client
  • Implement the API smartInfoHub in all the other clients
  • Gather more information, such as bandwidth, resource consumption, security level, connection time, number of packets lost and anything else that could be deemed interesting
  • Display information for every participant in a conference call. I began to implement it for the daemon in patch set 25.

Weekly report link

6. Thanks

I would like to thank the following:
- The Google Summer of Code organisation, for this wonderful experience.
- Debian, for accepting my project proposal and letting me embark on this fantastic adventure.
- My mentor, Mr Guillaume Roguez, and all his team, for being there to help me.

