Planet Debian

Planet Debian - http://planet.debian.org/

Jonas Meurer: debian lts report 2017.03

3 April, 2017 - 20:37
Debian LTS report for March 2017

March 2017 was my seventh month as a Debian LTS team member. I was allocated 14.75 hours and spent 11.25 of them on the following tasks:

  • DLA 836-2: Regression update for munin
  • DLA 869-1: Several security fixes for cgiemail
  • tested packages prepared by other LTS team members for regressions
  • libical: bug triaging, testing reproducers
  • putty: tested reproducers, backported patches (not finished yet)
Links

Enrico Zini: Free Software on my phone

3 April, 2017 - 16:15

I try to run my phone on Free Software as much as I can.

I recently switched to LineageOS. I took it as an opportunity to do a full factory wipe and reinstall, to simulate a disaster recovery.

Here's a summary of the basic software I use:

Sean Whitton: A different reason there are so few tenure-track jobs in philosophy

3 April, 2017 - 08:05

Recently I heard a different reason suggested as to why there are fewer and fewer tenure-track jobs in philosophy. University administrators are taking control of the tenure review process; previously departments made decisions and the administrators rubber-stamped them. The result of this is that it is easier to get tenure. This is because university administrators grant tenure based on quantitatively measurable achievements, rather than a qualitative assessment of the candidate qua philosopher. If a department thought that someone shouldn’t get tenure, the administration might turn around and say that they are going to grant it because the candidate has fulfilled such-and-such requirements.

Since it is easier to get tenure, hiring someone at the assistant professor level is much riskier for a philosophy department: they have to assume the candidate will get tenure. So the pre-tenure phase is no longer a probationary period. That is being pushed onto post-docs and graduate students. This results in the intellectual maturity of published work going down.

There are various assumptions in the above that could be questioned, but what’s interesting is that it takes a lot of the blame for the current situation off the shoulders of faculty members (there have been accusations that they are not doing enough). If tenure-track hires are a bigger risk to the quality of the academic philosophers who end up with permanent jobs, it is good that departments are averse to that risk.

Dirk Eddelbuettel: RApiDatetime 0.0.3

3 April, 2017 - 07:55

A brown-bag bug-fix release, 0.0.3, of RApiDatetime is now on CRAN.

RApiDatetime provides six entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representations, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. These six functions are all fairly essential and useful, but not one of them was previously exported by R.
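The intended consumption pattern, sketched here under the assumption that it mirrors other R API packages (the authoritative details are in the package documentation and the RApiDatetime.h header), is for a client package to declare both a run-time and a build-time dependency in its DESCRIPTION file, after which the entry points can be called from the client's own C code:

Imports: RApiDatetime
LinkingTo: RApiDatetime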

I left two undefined variables in calls in the exported header file; this only became an issue once I actually tried accessing the API from another package, as I am now doing in anytime.

Changes in RApiDatetime version 0.0.3 (2017-04-02)
  • Correct two simple copy-and-paste errors in RApiDatetime.h

  • Also enable registration in useDynLib, and explicitly export known and documented R access functions provided for testing

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the rapidatetime page.

For questions or comments please use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Eugene V. Lyubimkin: experiment: optionalising shared libraries without dl_open via generating stub libraries

2 April, 2017 - 21:11
Reading the discussion on debian-devel about shared library dependencies in C/C++, I wondered if it's possible to link with a shared library without having an absolute dependency on it.

The issue comes up often when one has a piece of software which could use extra functionality that a shared library (let's call it biglib) provides, but the developer/maintainer doesn't want to force all users to install it. The common solutions seem to be either:

- defining a plugin interface and a bridge library (described in more detail here);

- dlopen/dlsym to load each library symbol by hand (good luck doing that for high-level C++ libraries).


Both solutions involve dlopen at one stage or another to avoid linking against biglib, since if you do link against it, the application loader will refuse to load the application unless biglib and all of its dependencies are present. But what if the application is prepared to fall back at run time, and just needs a way to be able to start without biglib?

I went ahead and checked what happens if there is a stub library providing all the symbols which the application uses (directly or indirectly) out of biglib. It turns out that, at least for simple cases, it seems to work. I've published my experimental stub generator at https://github.com/jackyf/so-stub .
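To make the idea concrete, here is roughly what a generated stub boils down to, hand-written for a hypothetical libbig.so.1 exporting a single function (all names here are invented for the example):

/* stub.c: a dummy definition for every symbol the application imports
   from the real library; if the fallback logic is correct, none of
   these are ever actually called */
#include <stdlib.h>
int big_compute(int x) { (void)x; abort(); }

compiled with a matching soname:

gcc -shared -fPIC -Wl,-soname,libbig.so.1 -o libbig.so.1 stub.c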

For practical use, we'd also need a way to tell the loader that a stub library has to be loaded if the real library is not found. While there are many ways to instruct the loader to load something instead of system libraries (LD_PRELOAD, LD_LIBRARY_PATH, runpath, rpath), I found no way to load something if a system library was not found. In other words, I'd like to say "dear linker/loader, when you're loading myapp: if you didn't find libfoo1.so.4 in any of the system directories configured, try also /usr/lib/myapp/stubs/ (which'd contain the stubbed libfoo1.so.4)". Apparently, nothing like "rpath-fallback" exists right now. I'm considering creating a feature request for such an "rpath-fallback" if the "optional libraries via stubs" idea gets wider support.

Ben Hutchings: Debian LTS work, March 2017

2 April, 2017 - 09:57

I was assigned 14.75 hours of work by Freexian's Debian LTS initiative and worked all of these hours.

I prepared a security update for the Linux kernel and issued DLA 849-1. I also continued catching up with the backlog of fixes for the Linux 3.2 longterm stable branch. I released stable update 3.2.87 and started preparing the next stable update.

Mike Hommey: git-cinnabar experimental features

2 April, 2017 - 05:54

Since version 0.4.0, git-cinnabar has a few hidden experimental features. Two of them are available in 0.4.0, and a third was recently added on the master branch.

The basic mechanism to enable experimental features is to set a preference in the git configuration with a comma-separated list of features to enable, or all, for all of them. That preference is cinnabar.experiments.

Any means to set a git configuration can be used. You can:

  • Add the following to .git/config:
    [cinnabar]
    experiments=feature
    
  • Or run the following command:
    $ git config cinnabar.experiments feature
    
  • Or only enable the feature temporarily for a given command:
    $ git -c cinnabar.experiments=feature command arguments
    

But what features are there?

wire

In order to talk to Mercurial repositories, git-cinnabar normally uses the mercurial python modules. This experimental feature allows accessing Mercurial repositories without the mercurial python modules, relying instead on git-cinnabar-helper to connect to the repository through the mercurial wire protocol.

As of version 0.4.0, the feature is automatically enabled when Mercurial is not installed.
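To try it explicitly for a single command, the configuration mechanism described above can be combined with a clone (the repository URL here is a placeholder):

$ git -c cinnabar.experiments=wire clone hg::https://hg.example.org/repo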

merge

Git-cinnabar currently doesn’t allow pushing merge commits. The main reason for this is that generating the correct mercurial data for those merges is tricky, and needs to be gotten right.

In version 0.4.0, enabling this feature allows pushing merge commits as long as the parent commits are available on the mercurial repository. If they aren’t, you need to first push them independently, and then push the merge.

On current master, that limitation doesn’t exist anymore; you can just push everything in one go.

The main caveat with this experimental support for pushing merges is that it currently doesn’t handle the case where a file was moved on one of the branches the same way mercurial would (i.e. the information would be lost to mercurial users).

clonebundles

As of mercurial 3.6, Mercurial servers can opt-in to providing pre-generated bundles, which, when clients support it, takes CPU load off the server when a clone is performed. Good for servers, and usually good for clients too when they have a fast network connection, because downloading a pre-generated bundle is usually faster than waiting for the server to generate one.

As of a few days ago, the master branch of git-cinnabar supports cloning using those pre-generated bundles, provided the server advertises them (mozilla-central does).

Antoine Beaupré: My free software activities, February and March 2017

2 April, 2017 - 05:51
Looking into self-financing

Before I begin, I should mention that I started tracking my time working on free software more systematically. I spend a lot of time on the computer, as regular readers of this blog might remember, so I wanted to know exactly how much of that time was paid vs free work. I was already using org-mode's time clock system to keep track of my work hours, so I just extended this to my regular free software contributions, which also helps in writing those reports.

It turns out that over 60% of my computer time is spent working on free software. That's huge! I was expecting something more along the range of 20 to 40% of my time. So I started thinking about ways of financing this work. I created a Patreon page, but I'm hesitant to launch such a campaign: the only thing worse than "no patreon page" is "a patreon page with failed goals and no one financing it". So before starting such an effort, I'd like to get a feeling of what other people's experiences with it are. I know that joeyh is close to achieving his goals, but I can't compare with the guy that invented git-annex or debhelper, so I'm concerned I wouldn't be able to raise the same level of funding.

So any advice you have, feel free to contact me in private or in the comments. If you would be ready to fund my work, I'd love to know about it, obviously, but I guess I wouldn't get real numbers until I actually open up such a page...

Now, onto the regular report.

Wallabako

I spent a good chunk of time completing most of the things I had in mind for Wallabako, which I mentioned quickly in the previous report. Wallabako is now much easier to install, with clearer instructions, an easier-to-use configuration file, more reliable synchronization and read-status propagation. As usual, the Wallabako README file has all the details.

I've also looked at better integration with Koreader, the free software e-reader application that forms the basis of the okreader free software distribution, which has been able to port Debian to the Kobo e-readers, a project I am really excited about. This project has the potential of supporting Kobo readers beyond the lifetime that upstream grants them, and it removes a lot of the proprietary software and spyware that ships with the Kobo readers. So I have made a few contributions to okreader and also to koreader, the ebook reader okreader is based on.

Stressant

I rewrote stressant, my simple burn-in and stress-testing tool. After struggling in turn with Debirf, live-build, vmdebootstrap and even FAI, I just figured maybe it wasn't the best idea to try and reinvent that particular wheel: instead of inventing yet another Debian system build tool, maybe I should just reuse what's already there.

It turns out there's a well-known, successful and fairly complete recovery system called Grml. It is a Debian derivative, so all I needed to do was to stop procrastinating and actually write the stressant tool itself instead of just creating a distribution with a bunch of random tools shipped in. This allowed me to focus on which tools were best for stress-testing different components. This selection ended up being:

fio can also be used to overwrite disk drives with the proper options (--overwrite and --size=100%), although grml also ships with nwipe for wiping old spinning disks and hdparm to do a secure erase of SSD disks (whatever that's worth).
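As a rough sketch of what such a wipe-by-overwrite invocation could look like (the device name and job parameters here are placeholders; double-check them before pointing this at a real disk):

fio --name=wipe --filename=/dev/sdX --rw=write --bs=1M --direct=1 --overwrite=1 --size=100%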

Stressant still needs to be shipped with grml for this transition to be complete. In the meantime, I was able to configure the excellent public Gitlab CI service to provide ISO images with Stressant built-in as a stopgap measure. I also need to figure out a way to automate starting stressant from a boot menu to automate deployments on a larger scale, although, because I have little need for that feature at the moment, it will likely wait until a sponsor shows up.

Still, stressant has useful features like the capability of sending logs by email using a fresh new implementation of the Python SMTPHandler (BufferedSMTPHandler) which waits for logging to complete before sending a single email. Another interesting piece of code in there is the NegateAction argparse handler that enables the use of "toggle flags" (e.g. --flag / --no-flag). I'm so happy with the code that I figure I could just share it here directly:

class NegateAction(argparse.Action):
    '''add a toggle flag to argparse

    this is similar to 'store_true' or 'store_false', but allows
    arguments prefixed with --no to disable the default. the default
    is set depending on the first argument - if it starts with the
    negative form (defined by default as '--no'), the default is False,
    otherwise True.
    '''

    negative = '--no'

    def __init__(self, option_strings, *args, **kwargs):
        '''set default depending on the first argument'''
        default = not option_strings[0].startswith(self.negative)
        super(NegateAction, self).__init__(option_strings, *args,
                                           default=default, nargs=0, **kwargs)

    def __call__(self, parser, ns, values, option):
        '''set the truth value depending on whether
        it starts with the negative form'''
        setattr(ns, self.dest, not option.startswith(self.negative))

Short and sweet. I wonder why stuff like this is not in the standard library yet - maybe just because no one bothered yet? It'd be great to get feedback from more experienced Pythonistas on this one.
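To illustrate, here is a hypothetical parser wiring the class up (the option names are invented for the example, and NegateAction is the class defined above):

import argparse

parser = argparse.ArgumentParser()
# both spellings share the 'color' destination; the first option string
# ('--color') does not start with '--no', so the default comes out True
parser.add_argument('--color', '--no-color', action=NegateAction)

print(parser.parse_args([]).color)              # True
print(parser.parse_args(['--no-color']).color)  # False
print(parser.parse_args(['--color']).color)     # True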

I hope that my work on Stressant is complete. I get zero funding for this work, and have little use for it myself: I manage only a few machines and such a tool really shines when you regularly put new hardware online, which is (fortunately?) not my case anymore. I'd be happy, of course, to accompany organisations and people that wish to further develop and use such a tool.

A short demo of stressant as well as detailed description of how it works is of course available in its README file.

Standard third party repositories

After looking at improvements for the grml repository instructions, I realized there was no real "best practices" document on how to configure an Apt repository. Sure, there are tools like reprepro and others, but those hardly qualify as policy: they are very flexible and there are lots of ways to create insecure repositories or curl | sh style instructions, which we of course generally want to avoid.

While the larger problem of untrusted Debian packages remains generally unsolved (e.g. when you install any .deb file, it can get root on your system), it seemed to me one critical part of this problem was how to add a random third-party repository to your machine while limiting, as much as possible, what possible attackers could do with such a repository. In other words, to solve the more general problem of insecure .deb files, we also need to solve the distribution problem; otherwise fixing the .deb files themselves will be useless.

This led to the creation of standardized repository instructions that define:

  1. how to distribute the repository's public signing key (i.e. over HTTPS)
  2. how to name suites and components (e.g. use stable and main unless you have a good reason, and explain yourself)
  3. recommend a healthy dose of apt preferences pinning
  4. how to distribute keys (e.g. with a deriv-archive-keyring package)

I've seen so many third party repositories get this wrong. For example, a lot of repositories recommend this type of command to initialize the OpenPGP trust path:

curl http://example.com/key.asc | apt-key add -

This has the following problems:

  • the key is transferred in plaintext and can easily be manipulated by an active attacker (e.g. a router on your path to the server or a neighbor in a wifi cafe)
  • the key is added to the main trust root, which allows the key to authenticate as the real Debian archive, therefore giving it all rights over all packages
  • since it's part of the global archive, it's difficult for a package to remove/add the key when a key rollover is necessary (and repositories generally don't provide a deriv-archive-keyring to do that process anyway)

An example of this is the Docker install instructions, which, at least, manage to do this over HTTPS. Some other repositories don't even bother teaching people about the proper way of adding those keys. We settled for:

wget -O /usr/share/keyrings/deriv-archive-keyring.gpg https://deriv.example.net/debian/deriv-archive-keyring.gpg

That location was explicitly chosen to be out of the main trust directory, so that it needs to be explicitly added to the sources.list as well:

deb [signed-by=/usr/share/keyrings/deriv-archive-keyring.gpg] https://deriv.example.net/debian/ stable main

Similarly, we highly recommend users set up "apt pinning" to restrict what a given repository can do. Since pinning is so confusing, most people don't actually bother configuring it, and I have yet to see a single repo advise its users to configure those preferences, even though they are essential to limit what a repository can do. To keep configuration simple, we recommend this:

Package: *
Pin: origin deriv.example.net
Pin-Priority: 100

Obviously, for a single-package repository, the actual package name should be listed, e.g.:

Package: foo
Pin: origin deriv.example.net
Pin-Priority: 100

And the priority should probably be set to 1 unless you want to allow automatic upgrades.

It is my hope that this design will get more traction in the years to come and become a de-facto standard that will be a key part in safely adding third party repositories. There is obviously much more work to be done to improve security when installing untrusted .deb files, and I encourage Debian developers to consider contributing to the UntrustedDebs discussions and particularly to the Teams/Dpkg/Spec/DeclarativePackaging work.

Signal R&D

I spent a significant amount of time this month struggling with the Signal project on my phone. I'm still ambivalent on Signal: it's a centralized design, too dependent on phone numbers, but I must admit they get a lot of things right and it's the only free-software platform that allows for easy-to-use, multi-platform videoconferencing that my family can use.

I've been following Signal for a while: up until now, I had been using the LibreSignal rebuild of the official client, as it is distributed on an F-Droid repository. Because I try to avoid Google (proprietary) software on my phone, it's basically the only way I could even install Signal. Unfortunately, the repository is out of date and introduces another point of trust in the distribution model: now you not only need to trust the Signal authors to do the right thing, you also need to trust that F-Droid repo not to inject nasty code on your phone. I've therefore started a discussion about how Signal could be distributed outside of the Google Play Store. I'd like to think it's one of the things that led the Signal people to distribute an official copy of Signal outside of the playstore.

After much struggling, I was able to upgrade to this official client (not before reinstalling and registering, which unfortunately changed my secret keys) and will be able to upgrade easily by just downloading the APK. I do hope Signal enters F-Droid one day, but it could take a while because it still doesn't work without Google services and barely works with MicroG, the free software alternative to the Google services clients. Moxie also set a list of requirements like crash reporting and statistics that need to be implemented on F-Droid's side before he agrees to the deployment, so this could take a while.

I've also participated in the, ahem, discussion on the JWZ blog regarding a supposed vulnerability in Signal where it would leak previously unknown phone numbers to third parties. I reviewed the way the phone number is uploaded and, while it's possible to create a rainbow table of phone numbers (which are hashed with a truncated SHA-1 checksum), I couldn't verify the claims of other participants in the thread. For me, Signal still does the right thing with contacts, although I do question the way "read status" notifications get transmitted, but that belongs in another bug report / blog post.

Debian Long Term Support (LTS)

It's been more than a year since I started working on Debian LTS, the project started by Raphael Hertzog at Freexian. I didn't work much in February, so I had a lot of hours to catch up on, and was unfortunately unable to do so, partly because I was busy with other projects, and partly because my colleagues are doing a great job at resolving the most important issues.

So one of my concerns this month was finding work. It seemed that all the hard packages were either taken (e.g. my usual favorites, tiff and imagemagick, were done by others) or just too challenging (e.g. I don't feel quite comfortable tackling the LTS branch of the Linux kernel yet).

I spent quite a bit of time trying to figure out what was wrong with pcre3, only to realise the "32" in the report was not about the architecture, but about the character width. Because of this, I marked 4 CVEs (CVE-2017-7186, CVE-2017-7244, CVE-2017-7245, CVE-2017-7246) as "not-affected", since the 32-bit character support wasn't enabled in wheezy (or jessie, for that matter). I still spent some time trying to reproduce the issues, which require a compiler with AddressSanitizer support, something that was introduced in both Clang and GCC after Wheezy was released, which makes reproducing this fairly complicated...
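For context, building such a reproducer amounts to compiling the proof-of-concept with the sanitizer enabled, along these lines (file names are placeholders):

gcc -fsanitize=address -g -o reproducer reproducer.c

and that -fsanitize=address flag is precisely what wheezy's toolchain predates.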

This allowed me to experiment more with Vagrant, however, and I have provided the Debian cloud team with a 32-bit Vagrant box that was merged in shortly after, although it doesn't show up yet in the official list of Debian images.

Then I looked at the apparmor situation (CVE-2017-6507, Debian bug #858768). That was one tricky bug as well, since it's not a security issue in apparmor per se, but more an issue with things that assume a certain behavior from apparmor. I have concluded that Wheezy was not affected because there are no assumptions of proper isolation there - which are provided only starting from LXC 1.0 - and Docker is not in Wheezy. I also couldn't reproduce the issue on Jessie, but, as it turns out, the issue was sysvinit-specific, which is why I couldn't reproduce it under the default systemd configuration shipped with Jessie.

I also looked at the various binutils security issues: as I reported on the mailing list, I didn't see anything serious enough in there to warrant a security release and followed the lead of both the stable and Red Hat security teams by marking this "no-dsa". I similarly reviewed the mp3splt security issues (specifically CVE-2017-5666) and was fairly puzzled by that issue, which seems to be triggered only by the same address sanitization extensions as PCRE, although there was some pretty wild interplay with debugging flags in there. All in all, it seems we can't reproduce that issue in wheezy, but I do not feel confident enough in the results to push that issue aside for now.

I finally uploaded the pending graphicsmagick issue (DLA-547-2), a regression update to fix a crash that was introduced in the previous release (DLA-547-1, mistakenly named DLA-574-1). Hopefully that release should clear up some of the confusion and fix the regression.

I also released DLA-879-1 for CVE-2017-6369 in firebird2.5, which was an interesting experiment: I couldn't reproduce the issue in a local VM. After following the Ubuntu setup tutorial, as I wasn't too familiar with the Firebird database until now (hint: the default username and password is sysdba/masterkey), I ended up assuming we were vulnerable and just backporting the patch after seeing the jessie folks push out a release just in case.

I also looked at updating the ca-certificates package to deal with the pending WoSign/Startcom removal: I made an explicit list of the CAs that need to be removed after reviewing the Mozilla list. I also sent a patch for an unrelated issue where ca-certificates is writing to /usr/local (!!) in Debian bug #843722.

I have also done some "meta" work in starting a discussion about fixing the missing DLA links in the tracker; as you will notice, all of the above links lead nowhere. Thanks to pabs, there are now some links, but unfortunately there are about 500 DLAs missing from the website. We also discussed ways to address Debian bug #859123, something which is currently a manual process. This is now in the hands of the excellent webmaster team.

I have also filed a few missing security bugs (Debian bug #859135, Debian bug #859136), partly because I wanted to help the security team. But it turned out that I felt the script needed some improvements, so I submitted a patch to make it easier to run.

Other projects

As usual, there's the usual mixed bag of chaos:

More stuff on Github...

Enrico Zini: Stereo remote control

2 April, 2017 - 05:10

Wouldn't it be nice if I could use the hifi remote control to control mpd?

It turns out many wishes can come true when one has a GPIO board.

A friend of mine had a pile of IR receiver components in his stash and gave me one. It is labeled "38A 1424A", and the closest matching datasheet I found is this one.

I wired the receiver with the control pin on GPIO port 24, and set up lirc by following roughly this guide.

Enable lirc_rpi support

I had to add these lines to /boot/config.txt to enable lirc_rpi support:

dtoverlay=lirc-rpi,gpio_in_pin=24,gpio_out_pin=22
dtparam=gpio_in_pull=up

At first I had missed the configuration of the internal pull-up resistor, and reception worked but was very, very poor.

Then reboot.

Install and configure lirc
apt install lirc

I added these lines to /etc/lirc/hardware.conf:

DRIVER="default"
DEVICE="/dev/lirc0"
MODULES="lirc_rpi"

Stopped lircd:

systemctl stop lirc

Tested that the receiver worked:

mode2 -d /dev/lirc0

Downloaded remote control codes for my hifi and put them in /etc/lirc/lircd.conf.

Started lircd:

systemctl start lirc

Tested that lirc could parse commands from my remote control:

$ irw
0000400405506035 00 CD_PAUSE RAK-SC304W
0000400405506035 01 CD_PAUSE RAK-SC304W
0000400405506035 02 CD_PAUSE RAK-SC304W
0000400405505005 00 CD_PLAY RAK-SC304W
0000400405505005 01 CD_PLAY RAK-SC304W
Interface lirc with mpd

I made this simple lirc program and saved it in ~pi/.lircrc:

begin
     prog = irexec
     button = CD_NEXT
     config = mpc next
end

begin
     prog = irexec
     button = TAPE_FWD
     config = mpc next
end

begin
     prog = irexec
     button = TAPE_REW
     config = mpc prev
end

begin
     prog = irexec
     button = CD_PREV
     config = mpc prev
end

begin
     prog = irexec
     button = TAPE_PAUSE
     config = mpc pause
end

begin
     prog = irexec
     button = CD_PAUSE
     config = mpc pause
end

begin
     prog = irexec
     button = CD_PLAY
     config = mpc toggle
end

begin
     prog = debug
     button = TAPE_PLAY_RIGHT
     config = mpc toggle
end

Then wrote a systemd unit file to start irexec and saved it as /etc/systemd/system/mpd-irexec.service:

[Unit]
Description=Control mpd via lirc remote control
After=lirc mpd

[Service]
Type=simple
ExecStart=/usr/bin/irexec
Restart=always
User=pi
WorkingDirectory=~

[Install]
WantedBy=multi-user.target

Then systemctl start mpd-irexec to start irexec, and systemctl enable mpd-irexec to start irexec at boot.

Profit!

All of this was done by me, with almost no electronics training, following online tutorials for the hardware parts.

To connect components I used a breadboard and female-male jumper leads, so I didn't have to solder, for which I have very little practice.

Now the Raspberry Pi is so much a part of my hifi component that I can even control it with the hifi remote control.

Given that I disconnected the CD and tape players, there are now at least 16 free buttons on the remote control that I can script however I like.

Thorsten Alteholz: My Debian Activities in March 2017

2 April, 2017 - 04:18

FTP assistant

This month I marked 111 packages for accept and sent four emails to maintainers asking questions. The bad number of the month is the 41 packages I had to reject. This rejection rate was the worst of all my NEW months.

May I ask everybody to pay a bit more attention before uploading/sponsoring a package?

Debian LTS

This was my thirty-third month working for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14.75h. During that time I did uploads of

  • [DSA 3798-1] tnef security update for four CVEs
  • [DLA 839-2] tnef regression update
  • [DSA 3798-2] tnef regression update
  • tnef security update in unstable/testing for four CVEs
  • [DLA 878-1] libytnef security update for ten CVEs

I also took care of radare and marked all CVEs as not-affected in Wheezy. My next package on the list will be qbittorrent.

Other stuff

I uploaded a new version of entropybroker to fix a bug with the handling of return codes of ppoll. This version will also make it to Stretch. The same goes for a fix of a bug in alljoyn-services-1509. I don’t know why everybody talks about unblock bugs that need to be filed!? The release team was always faster in granting the unblock than me in filing the corresponding bug.

As my DOPOM for this month I adopted httperf, took care of some bugs and sent patches upstream.

I also created a new project on Alioth called debian-mobcom (Alioth page), which shall be a place for all packages concerning the network side of mobile communication. So far I have only uploaded libosmocore to experimental, so the package list is rather short.

Bits from Debian: Unknown parallel universe uses Debian

1 April, 2017 - 20:30

The space agencies running the International Space Station (ISS) reported that a laptop accidentally thrown into space as waste in 2013 from the International Space Station may have connected with a parallel Universe. This laptop was running Debian 6 and the ISS engineers managed to track its travel through outer space. In early January, the laptop signal was lost, but recovered two weeks later in the same place. ISS engineers suspect that the laptop may have met and crossed a wormhole, arriving in a parallel Universe from where "somebody" sent it back later.

Eventually the laptop was recovered, and in a first analysis the ISS engineers found that the laptop had a dual boot: a partition running the Debian installation made by them, and a second partition running what seems to be a Debian fork or derivative totally unknown until now.

The engineers have been in contact with the Debian Project in the last weeks, and a Debian group formed with delegates from different Debian teams has begun to study this new Debian derivative system. From the early results of this research, we can proudly say that somebody (or a group of beings) in a parallel universe understands Earth computers, and Debian, enough to:

  • Clone the existing Debian system in a new partition and provide a dual boot using Grub.
  • Change the desktop wallpaper from the previous Spacefun theme to one in rainbow colors.
  • Fork all the packages whose source code was present in the initial Debian system, patch multiple bugs in those packages, and add some more patches for some tricky security problems.
  • Add ten new language locales that do not correspond to any language spoken on Earth, with full translations for four of them.
  • A copy of the Debian website repository, migrated to the git version control system and perfectly running, has been found in the /home/earth0/Documents folder. This new repo includes code to show the Debian micronews in the home page and many other improvements, keeping the style of not needing JavaScript and providing a nice control of up-to-date/outdated translations, similar to the one existing in Debian.

The work towards knowing this new Universe better and finding a way to communicate with them has just begun; all the Debian users and contributors are invited to join the effort to study the operating system found. We want to prepare our Community and our Universe to live and work peacefully and respectfully with the parallel Universe communities, in the true spirit of Free Software.

In the following weeks a General Resolution will be proposed for updating our motto to "the multiversal operating system".

David Kalnischkies: Winning the Google Open Source Lottery

1 April, 2017 - 16:52

I don't know about you, but I frequently get mails announcing that I was picked as the lucky winner of a lottery, compensation program or simply as "business associate". Obvious spam of course; that never happens in reality. Just like my personal "favorite" at the moment: mails notifying me of inheritance from a previously (more or less) unknown relative. It's just that this is what happened to me, basically a few weeks ago, in reality (over the phone though) – and I am still dealing with the bureaucracy required to teach the legal system that I have absolutely no intention of becoming the legal successor of someone I haven't seen in 20 years, regardless of how close the family relation is on paper… but that might be the topic of another day.

On the 1st of March, a mail titled "Google Open Source Peer Bonus Program" looked at first as if it would fall into this lottery spam class. It didn't exactly help that the mail was multipart HTML and text, but the text part contained really only the text, not mentioning the embedded links used in the HTML part. It even included a prominent and obvious red flag: "Please fill out the form". The 20% Bayes score didn't come from nothing. Still, for better or worse, the words "Open Source" made it unlikely to be spam, similar to how the word PGP indicates authenticity. So it happened, another spam message became true for me. I wonder which one will be next…

You have probably figured out by now that I didn't know that program before. Kinda embarrassing for a previous Google Summer of Code student (GSoC is run by the same office), but the idea behind it is simple: Google employees can nominate contributors to open source stuff for a small monetary "thank you!" gift card. Earlier this week, winners for this round were announced – 52 contributors including yours truly. You might be surprised, but the rationale given behind my name is APT (I got a private mail with the full rationale from my "patron", just in case you wonder if at least I would know more).

It is funny how a guy who was taken aback by the prospect of needing a package manager like YaST to use Linux contributed just months later the first patch to apt, and has roughly 8 years later amassed more than 2400 commits. It's birthday season in my family, with e.g. mine just a few days ago, so it seems natural that apt has its own birthday today, just as if it were part of my family: 19 years old this little bundle of bugs joy is now! In more sober moments I wonder sometimes how apt and I would have turned out if we hadn't met. Would apt have met someone else? Would I? Given that I am still the newest team member and only recently joined Debian as DD at all…

APT has some strange ways of showing that it loves you: it e.g. helps users compose mails which end in a dilemma, to give a recent example. Perhaps you need to be a special kind of crazy[1] to consider this good, but as I see it apt has a big enough userbase that regardless of what your patch is doing, someone will like it. That drastically increases the chances that someone will also like it enough to say so in public – offsetting complaints from all those who don't like the (effects of the) patch, which are omnipresent. And twice in a blue moon some of those will even step forward and thank you explicitly. Not that it would be necessary, but it is nice anyhow. So, thanks for the love supercow, Google & apt users! 🙂

Or in other words: APT might very well be one of the most friendly (package manager related) projects to contribute to, as the language-specific managers have smaller userbases and hence a smaller chance of having someone liking your work (in public)… so contribute a patch or two and be loved, too! 💖

Disclaimer: I get no bonus for posting this, nor are any other strings attached. Birthdays are just a good time to reflect. In terms of what I will do with my newfound riches (in case I really receive them – I haven't yet, so that could still be an elaborate scam…): APT is a very humble program, but even it is thinking about moving away from a dev-box with less than 4 GB of RAM and no SSD, so it is happily accepting the gift and expects me to upgrade sooner now. What kind of precedent does this set for the two-decade milestone next year? If APT isn't obsolete by then… We will see.

  1. which even ended up topping Hacker News around New Year's Eve… who would have thought that apt and reproducibility bugs are top news ;) 

Mike Hommey: Progress on git-cinnabar memory usage

1 April, 2017 - 16:45

This all started when I figured out that git-cinnabar was using crazy amounts of memory when cloning mozilla-central. That pointed to memory allocation patterns that triggered a suboptimal behavior in the glibc memory allocator, and, while overall, git-cinnabar wasn’t really abusing memory all things considered, it happened to be realloc()ating way too much.

It also turned out that recent changes on the master branch had made most uses of fast-import synchronous, making the whole process significantly slower.

This is where we started from on 0.4.0:

And on the master branch as of be75326:

An interesting thing to note here is that the glibc allocator runaway memory use was, this time, more pronounced on 0.4.0 than on master. It was the opposite originally, but as I mentioned in the past, ASLR makes it not happen exactly the same way each time.

While I’m here, one thing I failed to mention in the previous posts is that all these measurements were done by cloning a local mercurial clone of mozilla-central, served from localhost via HTTP to eliminate the download time from hg.mozilla.org. And while mozilla-central itself has received new changesets since the first post, the local clone has not been updated, such that all subsequent clone tests I did were cloning the exact same repository under the exact same circumstances.

After last blog post, I focused on the low hanging fruits identified so far:

  • Moving the mercurial to git SHA1 mapping to the helper process (Finding a git bug in the process).
  • Tracking mercurial manifest heads in the helper process.
  • Removing most of the synchronous calls to the helper happening during a clone.

And this is how things now look on the master branch as of 35c18e7:

So where does that put us?

  • The overall clone is now about 11 minutes faster than 0.4.0 (and about 50 minutes faster than master as of be75326!)
  • Non-shared memory use of the git-remote-hg process stays well under 2GB during the whole clone, with no spike at the end.
  • git-cinnabar-helper now uses more memory, but the sum of both processes is less than what it used to be, even when compensating for the glibc memory allocator issue. One thing to note is that while the git-cinnabar-helper memory use goes above 2GB at the end of the clone, a very large part is due to the pack window size being 1GB on 64-bits (vs. 32MB on 32-bits). Memory usage should stay well under the 2GB address space limit on a 32-bits system.
  • CPU usage is well above 100% for most of the clone.

On a more granular level:

  • The “Import manifests” phase is now 13 minutes faster than it was in 0.4.0.
  • The “Read and import files” phase is still almost 4 minutes slower than in 0.4.0.
  • The “Import changesets” phase is still almost 2 minutes slower than in 0.4.0.
  • But the “Finalization” phase is now 3 minutes faster than in 0.4.0.

What this means is that there’s still room for improvement. But at this point, I’d rather focus on other things.

Logging all the memory allocations with the python allocator disabled still resulted in a 6.5GB compressed log file, containing 2.6 billion calls to malloc, calloc, free and realloc (down from 2.7 billion in be75326). The number of allocator calls done by the git-remote-hg process is down to 2.25 billion (from 2.34 billion in be75326).

Surprisingly, while more things were moved to the helper, it still made fewer allocations than in be75326: 345 million, down from 363 million. Presumably, this is because the number of commands processed by the fast-import code was reduced.

Let’s now take a look at the various metrics we analyzed previously (the horizontal axis represents the number of allocator calls that happened before the measurement):

A few observations to make here:

  • The allocated memory (requested bytes) is well below what it was, and the spike at the end is entirely gone. It also more closely follows the amount of raw data we’re holding on to (which makes sense since most of the bookkeeping was moved to the helper)
  • The number of live allocations (allocated memory pointers that haven’t been free()d yet) has gone significantly down as well.
  • The cumulated[*] bytes are now in a much more reasonable range, with the lower bound close to the total amount of data we’re dealing with during the clone, and the upper bound slightly over twice that amount (the upper bound for the be75326 is not shown here, but it was around 45TB; less than 7TB is a big improvement).
  • There are fewer allocator calls during the first phases and the “Importing changesets” phase, but more during the “Reading and importing files” and “Importing manifests” phases.

[*] The upper bound is the sum of all sizes ever given to malloc, calloc, realloc etc., and the lower bound is the same, but removing the size of allocations passed as input to realloc (in practical terms, this pretends reallocs never happened and that the final size for a given reallocated pointer is the one that counts)

So presumably, some of the changes led to more short-lived allocations. Considering python uses its own allocator for sizes smaller than 512 bytes, it’s probably not so much of a problem. But let’s look at the distribution of buffer sizes (including all sizes given to realloc).

(Bucket size is 16 bytes)

What is not completely obvious from the logarithmic scale is that, in fact, 98.4% of the allocations are less than 512 bytes with the current master (35c18e7), and they were 95.5% with be75326. Interestingly, though, in absolute numbers, there are fewer allocations smaller than 512 bytes in current master than in be75326 (1,194,268,071 vs 1,214,784,494). This suggests the extra allocations that happen during some phases are larger than that.

There are clearly fewer allocations across the board (apart from very few exceptions), and close to an order of magnitude fewer allocations larger than 1MiB. In fact, widening the bucket size to 32KiB shows an order of magnitude difference (or close) for most buckets:

An interesting thing to note is how some sizes are largely overrepresented in the data with buckets of 16 bytes, like 768, 1104, 2048, 4128, with other smaller bumps for e.g. 2144, 2464, 2832, 3232, 3696, 4208, 4786, 5424, 6144, 6992, 7920… While some of those are powers of 2, most aren’t, and some of them may actually represent objects sized with a power of 2, but that have an extra PyObject overhead.

While looking at allocation stats, I got to wonder what the lifetimes of those allocations looked like. So I scanned the allocator logs and measured the distance between when an allocation is made and when it is freed, ignoring reallocs.

To give a few examples of what I mean, the following allocation for p gets a lifetime of 0:

void *p = malloc(42);
free(p);

The following gets a lifetime of 1:

void *p = malloc(42);
void *other = malloc(42);
free(p);

And the following gets a lifetime of 1 as well:

void *p = malloc(42);
p = realloc(p, 84);
free(p);

(that is, it is not counted as two malloc/free pairs)

The further away the free is from the corresponding malloc, the larger the lifetime. And the largest a lifetime can ever be is the total number of allocator function calls minus two, in the hypothetical case where the very first allocation is freed by the very last call (minus two because we defined the lifetime as the distance).
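As a side note, here is a sketch of how one might compute these lifetimes from a parsed log; the (op, arg, result) tuple format is an assumption for illustration, since the actual log format isn't shown here:

def lifetimes(calls):
    '''calls: ordered list of (op, arg_ptr, result_ptr) allocator events'''
    born = {}   # live pointer -> index of the call that allocated it
    dist = []
    for i, (op, arg, result) in enumerate(calls):
        if op in ('malloc', 'calloc'):
            born[result] = i
        elif op == 'realloc':
            # reallocs are ignored: the new pointer inherits the birth
            # index of the old one, so the pair counts as one allocation
            born[result] = born.pop(arg, i)
        elif op == 'free' and arg in born:
            # distance between the free and the original allocation,
            # not counting the two calls themselves
            dist.append(i - born.pop(arg) - 1)
    return dist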

What comes out of this data:

  • As expected, there are more short-lived allocations in 35c18e7.
  • Around 90% of allocations have a lifetime spanning 10% of the process life or less. Conversely, that leaves a rather surprisingly large number of allocations with a very large lifetime.
  • Around 80% of allocations have a lifetime spanning 0.01% of the process life or less.
  • The median lifetime is around 0.0000002% (2×10⁻⁷%) of the process life, which, in absolute terms, is around 500 allocator function calls between a malloc and a free.
  • If we consider every imported changeset, manifest and file to require a similar number of allocations, and considering there are about 2.7M of them in total, each spans about 3.7×10⁻⁷%. About 53% of all allocations on be75326 and 57% on 35c18e7 have a lifetime below that. Whenever I get to look more closely at memory usage again, I’ll probably look at the data separately for each individual phase.
  • One surprising fact, that doesn’t appear on the graph because of the logarithmic scale not showing “0” on the horizontal axis, is that 9.3% on be75326 and 7.3% on 35c18e7 of all allocations have a lifetime of 0. That is, whatever the code using them is doing, it’s not allocating or freeing anything else, and not reallocating them either.

All in all, what the data shows is that we’re definitely in a better place now than we used to be a few days ago, and that there is still work to do on the memory front, but:

  • As mentioned in a previous post, there are bigger wins to be had from not keeping manifests data around in memory at all, and by importing it directly instead.
  • In time, a lot of the import code is meant to move to the helper, where the constraints are completely different, and it might not be worth spending time now on reducing the memory usage of python code that might go away soon(ish). The situation was bad and necessitated action rather quickly, but we’re now in a place where it’s not as bad anymore.

So at this point, I won’t look any deeper into the memory usage of the git-remote-hg python process, and will instead focus on the planned metadata storage changes. They will make it easier to share the metadata (allowing faster and more straightforward gecko-dev grafts), and will allow importing manifests earlier, which, as mentioned already, will help reduce memory use, but, more importantly, will allow doing more actual work while downloading the data. On slow networks, this is crucial to make clones and pulls faster.

Francois Marier: Manually expanding a RAID1 array on Ubuntu

1 April, 2017 - 13:00

Here are the notes I took while manually expanding a non-LVM encrypted RAID1 array on an Ubuntu machine.

My original setup consisted of a 1 TB drive along with a 2 TB drive, which meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3 TB drive).

Partition the new drive

In order to partition the new 3 TB drive, I started by creating a temporary partition on the old 2 TB drive (/dev/sdc) to use up all of the capacity on that drive:

$ parted /dev/sdc
unit s
print
mkpart
print

Then I initialized the partition table and created the EFI partition on the new drive (/dev/sdd):

$ parted /dev/sdd
unit s
mktable gpt
mkpart

Since I want to have the RAID1 array be as large as the smaller of the two drives, I made sure that the second partition (/home) on the new 3 TB drive had:

  • the same start position as the second partition on the old drive
  • the end position of the third partition (the temporary one I just created) on the old drive

I created the partition and flagged it as a RAID partition:

mkpart
toggle 2 raid

and then deleted the temporary partition on the old 2 TB drive:

$ parted /dev/sdc
print
rm 3
print
Create a temporary RAID1 array on the new drive

With the new drive properly partitioned, I created a new RAID array for it:

mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd1 missing

and added it to /etc/mdadm/mdadm.conf:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

which required manual editing of that file to remove duplicate entries.
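The appended entry looks roughly like this (all identifiers below are placeholders):

ARRAY /dev/md10 metadata=1.2 name=myhost:10 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd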

Create the encrypted partition

With the new RAID device in place, I created the encrypted LUKS partition:

cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
cryptsetup luksOpen /dev/md10 chome2

I took the UUID for the temporary RAID partition:

blkid /dev/md10

and put it in /etc/crypttab as chome2.
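The resulting /etc/crypttab line follows the usual name/device/keyfile/options layout (the UUID value below is a placeholder):

chome2 UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee none luks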

Then, I formatted the new LUKS partition and mounted it:

mkfs.ext4 -m 0 /dev/mapper/chome2
mkdir /home2
mount /dev/mapper/chome2 /home2
Copy the data from the old drive

With the home partitions of both drives mounted, I copied the files over to the new drive:

eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/

making use of wrappers that preserve system responsiveness during I/O-intensive operations.

Switch over to the new drive

After the copy, I switched over to the new drive in a step-by-step way:

  1. Changed the UUID of chome in /etc/crypttab.
  2. Changed the UUID and name of /dev/md1 in /etc/mdadm/mdadm.conf.
  3. Rebooted with both drives.
  4. Checked that the new drive was the one used in the encrypted /home mount using: df -h.
Add the old drive to the new RAID array

With all of this working, it was time to clear the mdadm superblock from the old drive:

mdadm --zero-superblock /dev/sdc1

and then change the second partition of the old drive to make it the same size as the one on the new drive:

$ parted /dev/sdc
rm 2
mkpart
toggle 2 raid
print

before adding it to the new array:

mdadm /dev/md1 -a /dev/sdc1
Rename the new array

To change the name of the new RAID array back to what it was on the old drive, I first had to stop both the old and the new RAID arrays:

umount /home
cryptsetup luksClose chome
mdadm --stop /dev/md10
mdadm --stop /dev/md1

before running this command:

mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2

and updating the name in /etc/mdadm/mdadm.conf.

The last step was to regenerate the initramfs:

update-initramfs -u

before rebooting into something that looks exactly like the original RAID1 array but with twice the size.

Russ Allbery: Review: Two Serpents Rise

1 April, 2017 - 11:39

Review: Two Serpents Rise, by Max Gladstone

Series: Craft #2
Publisher: Tor
Copyright: October 2013
ISBN: 1-4668-0204-9
Format: Mobi
Pages: 350

This is the second book in the Craft Sequence, coming after Three Parts Dead, but it's not a sequel. The only thing shared between the books is the same universe and magical system. Events in Two Serpents Rise were sufficiently distant from the events of the first book that it wasn't obvious (nor did it matter) where it fit chronologically.

Caleb is a gambler and an investigator for Red King Consolidated, the vast firm that controls the water supply, and everything else, in the desert city of Dresediel Lex. He has a fairly steady and comfortable job in a city that's not comfortable for many, one of sharp divisions between rich and poor and which is constantly one water disturbance away from riot. His corporate work life frustrates his notorious father, a legendary priest of the old gods who were defeated by the Red King and who continues to fight an ongoing terrorist resistance to the new corporate order. But Caleb has as little as possible to do with that.

Two Serpents Rise opens with an infiltration of the Bright Mirror Reservoir, one of the key components of Dresediel Lex's water supply. It's been infested with Tzimet: demon-like creatures that, were they to get into the city's water supply, would flow from faucets and feed on humans. Red King Incorporated discovered this one and sealed the reservoir before the worst could happen, but it's an unsettling attack. And while Caleb is attempting to determine what happened, he has an unexpected encounter with a cliff runner: a daredevil parkour enthusiast carrying an amulet of old Craft that would keep her invisible to anyone without the magical legacy Caleb is blessed (or cursed) with. He doesn't think her presence is related to the attack, but he can't be sure, particularly with the muddling fact that he finds her personally fascinating.

Like Three Parts Dead, you could call Two Serpents Rise an urban fantasy in that it's a fantasy that largely takes place in cities and is concerned with such things as infrastructure, politics, and the machinery of civilization. However, unlike Three Parts Dead, it takes itself much more seriously and has less of the banter and delightful absurdity of the previous book. The identification of magic with contracts and legalities is less amusingly creative here and more darkly sinister. Partly this is because the past of Dresediel Lex is full of bloodthirsty gods and human sacrifice, and while Red King Consolidated has put an end to that practice, it lurks beneath the surface and is constantly brought to mind by some grisly artifacts.

I seem to always struggle with fantasy novels based loosely on central American mythology. An emphasis on sacrifice and terror always seems to emerge from that background, and it verges too close to horror for me. It also seems prone to clashes of divine power and whim instead of thoughtful human analysis. That's certainly the case here: instead of Tara's creative sleuthing and analysis, Caleb's story is more about uncertainty, obsession, gambling, and shattering revelations. Magical rituals are described more in terms of their emotional impact than their world-building magical theory. I think this is mostly a matter of taste, and it's possible others would like Two Serpents Rise better than the previous book, but it wasn't as much my thing.

The characters are a mixed bag. Caleb was a bit too passive to me, blown about by his father and his employer and slow to make concrete decisions. Mal was the highlight of the book for me, but I felt at odds with the author over that, which made the end of the book somewhat frustrating. Caleb has some interesting friends, but this is one of those books where I would have preferred one of the supporting cast to be the protagonist.

That said, it's not a bad book. There are some very impressive set pieces, the supporting cast is quite good, and I am wholeheartedly in favor of fantasy novels that are built around the difficulties of water supply to a large, arid city. This sort of thing has far more to do with human life than the never-ending magical wars over world domination that most fantasy novels focus on, and it's not at all boring when told properly. Gladstone is a good writer, and despite the focus of this book not being as much my cup of tea, I'll keep reading this series.

Followed by Full Fathom Five.

Rating: 7 out of 10

Paul Wise: FLOSS Activities March 2017

1 April, 2017 - 08:01
Changes

Issues

Review

Administration
  • Debian systems: apply a patch to userdir-ldap, ask a local admin to reset a dead powerpc buildd, remove dead SH4 porterboxen from LDAP, fix perms on www.d.o OC static mirror, report false positives in an automated abuse report, redirect 1 student to FAQs/support/DebianEdu, redirect 1 event organiser to partners/trademark/merchandise/DPL, redirect 1 guest account seeker to NM, redirect 1 @debian.org desirer to NM, redirect 1 email bounce to a changes@db.d.o user, redirect 2 people to the listmasters, redirect 1 person to Debian Pure Blends, redirect 1 user to a service admin and redirect 2 users to support
  • Debian packages site: deploy my ports/cruft changes
  • Debian wiki: poke at HP page history and advise a contributor, whitelist 13 email addresses, whitelist 1 domain, check out history of a banned IP, direct 1 hoster to DebConf17 sponsors team, direct 1 user to OpenStack packaging, direct 1 user to InstallingDebianOn and h-node.org, direct 2 users to different ways to help Debian and direct 1 emeritus DD on repository wiki page reorganisation
  • Debian QA: fix an issue with the PTS news, remove some debugging cruft I left behind, fix the usertags on a QA bug and deploy some code fixes
  • Debian mentors: security upgrades and service restarts
  • Openmoko: security upgrades and reboots
Sponsors

The valgrind backport, samba and libthrift-perl bug reports were sponsored by my employer. All other work was done on a volunteer basis.

Keith Packard: DRM-lease-2

1 April, 2017 - 07:25
DRM leasing part deux (kernel side)

I've stabilized the kernel lease implementation so that leases now work reliably, and don't require a reboot before running the demo app a second time. Here's what's changed:

  1. Reference counting is hard. I'm creating a new drm_master, which means creating a new file. There were a bunch of reference counting errors in my first pass; I've cleaned those up while also reducing the changes needed to the rest of the DRM code.

  2. Added a 'mask_lease' value to the leases -- this controls whether resources in the lease are hidden from the lessor; when it is not set, the lessor can continue to do operations on the leased resources if it likes.

  3. Hacked the mutex locking assertions to not crash in normal circumstances. I'm now doing:

    BUG_ON(__mutex_owner(&master->dev->mode_config.idr_mutex) != current);

    to make sure the mutex is held by the current thread, instead of just making sure some thread holds the mutex. I have this memory of a better way to do this, but now I can't dig it up. Suggestions welcome, of course; one candidate is sketched below.
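
    For what it's worth, the "better way" might be the kernel's lockdep machinery: lockdep_assert_held() warns when the current context does not hold the given lock, rather than just checking that somebody does. A minimal sketch, assuming a kernel built with CONFIG_PROVE_LOCKING (drm_lease_example is a made-up stand-in for the real lease code):

    #include <linux/lockdep.h>

    static void drm_lease_example(struct drm_master *master)
    {
        /* Complains via lockdep if the current task does not hold the
         * mutex; unlike the BUG_ON() above, it compiles away on kernels
         * built without lockdep. */
        lockdep_assert_held(&master->dev->mode_config.idr_mutex);

        /* ... manipulate lease state protected by idr_mutex ... */
    }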

I'm reasonably pleased with the current state of the code, although I want to squash the patches together so that only the final state of the design is represented, rather than the series of hacks present now.

Comments on #dri-devel

I spent a bit of time on the #dri-devel IRC channel answering questions about the DRM-lease design and the changes above reflect that.

One concern was about mode setting from two masters at the same time. Mode setting depends on a number of shared 'hidden' resources, such as memory FIFOs. If either the lessee or the lessor wants to change the displayed modes, the request may fail due to conflicts over these resources, with no easy way to recover.

A related concern was that the current TEST/render/commit mechanism used by compositors may no longer be reliable as another master could change hidden resource usage between the TEST and commit operations. Daniel Vetter suggested allowing the lessor to 'exclude' lessee mode sets during this operation.
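
For concreteness, here is a minimal sketch of that TEST/commit cycle using libdrm's atomic API (the CRTC, property, and blob IDs are placeholders, not values from any real driver); with a second master active, the final commit can fail even though the test succeeded:

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int try_modeset(int fd, uint32_t crtc_id, uint32_t mode_prop_id,
                    uint32_t mode_blob_id)
    {
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        int ret;

        if (!req)
            return -1;

        drmModeAtomicAddProperty(req, crtc_id, mode_prop_id, mode_blob_id);

        /* Step 1: validate the configuration without touching the
         * hardware. */
        ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY |
                                  DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
        if (ret == 0) {
            /* Step 2: the real commit. Another master may have changed
             * hidden resource usage (memory FIFOs etc.) in between, so
             * this can now fail despite the successful test. */
            ret = drmModeAtomicCommit(fd, req,
                                      DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
        }

        drmModeAtomicFree(req);
        return ret;
    }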

One solution would be to have the lessor set the mode on the leased resources before ceding control of the objects; once set, the lessee wouldn't perform additional mode setting operations. This would limit the lessee's flexibility quite a bit, as it couldn't pop up overlay planes or change resolution.

Questions

Let's review the questions from my last post, DRM-lease:

  • What should happen when a Lessor is closed? Should all access to controlled resources be revoked from all descendant Lessees?

    Answer: The lessor is referenced by the lessee and isn't destroyed until it has been closed and all lessees are destroyed.

  • How about when a Lessee is closed? Should the Lessor be notified in some way?

    Answer: I think so? Need to figure out a mechanism here.

  • CRTCs and Encoders have properties. Should these properties be automatically included in the lease?

    Answer: No, userspace is responsible for constructing the entire lease.

Remaining Kernel Work

The code is running, and appears stable. However, it's not quite done yet. Here's a list of remaining items that I know about:

  1. Changing leases should update sub-leases. When you reduce the resources in one lease, the kernel should walk any sub-leases and clear out resources which the lessor no longer has access to.

  2. Sending events when leases are created/destroyed. When a lease is created, if the mask_lease value is set, then the lessor should get regular events describing the effective change. Similarly, both lessor and lessee should get events when a lease is changed.

  3. Refactoring the patch series to squash intermediate versions of the new code.

Remaining Other Work

Outside of the kernel, I'll be adding X support for this operation. Here's my current thinking:

  • Extend RandR to add a lease-creation request that returns a file descriptor. This will take a mode and the server will set that mode before returning.

  • Provide some EDID-based matching for HMD displays in the X server. The goal is to 'hide' these from the regular desktop so that the HMD screen doesn't flicker with desktop content before the lease is created. I think what I want is to let an X client provide some matching criteria and for it to get an event when an output with matching EDID information is connected (a sketch of such matching follows this list).
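
As a rough illustration of what such matching criteria might look like (this is not X server code, and HMD_VENDOR/HMD_PRODUCT are invented values), the base EDID block carries a three-letter vendor ID in bytes 8-9 and a little-endian product code in bytes 10-11:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define HMD_VENDOR  "HVR"   /* hypothetical PNP vendor ID */
    #define HMD_PRODUCT 0x1234  /* hypothetical product code */

    static bool edid_matches_hmd(const uint8_t *edid, size_t len)
    {
        char vendor[4];
        uint16_t mfg, product;

        if (len < 128)
            return false;

        /* Bytes 8-9: big-endian word packing three 5-bit letters,
         * where 1 = 'A'. */
        mfg = (uint16_t)((edid[8] << 8) | edid[9]);
        vendor[0] = '@' + ((mfg >> 10) & 0x1f);
        vendor[1] = '@' + ((mfg >> 5) & 0x1f);
        vendor[2] = '@' + (mfg & 0x1f);
        vendor[3] = '\0';

        /* Bytes 10-11: little-endian product code. */
        product = (uint16_t)(edid[10] | (edid[11] << 8));

        return strcmp(vendor, HMD_VENDOR) == 0 && product == HMD_PRODUCT;
    }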

With that, I think this phase of the project will be wrapped up and it will be time to move on to actually hooking up a real HMD and hacking the VR code to use this new stuff.

Seeing the code

For those interested in seeing the state of the code so far, there are kernel, drm and kmscube repositories here:

Gunnar Wolf: Cannot help but sharing a historic video

1 April, 2017 - 06:51

People who know me know that I do whatever I can to avoid watching videos online if there's any other way to get to the content. It may be that I'm too old-fashioned, or that I have a short attention span and prefer a medium where I can quickly scroll up and down a paragraph, or that I feel the time between bits of content is just a useless transition, or whatever...

But I bit. And I loved it.

A couple of days ago, OS News featured a post titled From the AT&T Archives: The UNIX Operating System. It links to a couple of videos on AT&T's YouTube channel.

I watched AT&T Archives: The UNIX Operating System, an amazing piece of historical evidence: a 27-minute documentary produced in 1981 covering what Unix is, and why Unix is so unique, useful and friendly.

What's the big deal about it? That this documentary shows first-hand that we are not repeating myths we came up with along the way: the same principles of process composition, of simplicity and robustness, spoken directly by many core actors of the era — Brian Kernighan (who drove a great deal of the technical explanation), Alfred Aho, Dennis Ritchie, Ken Thompson... and several more whose names I didn't catch.

Of course, the video includes casual shots of life at AT&T, including lots of terminals (some of them quite similar to the first ones I used here in Mexico), and then-amazing color animation showing the state of the art of computer video 35 years ago...

A delightful way to lose half an hour of productivity. And a bit of material that will surely find its way into my classes for some future semester :)

[ps] Yes, I don't watch videos on YouTube; I don't want to enable its dirty JavaScript. So, of course, I use the great youtube-dl tool. I cannot share the video file itself here due to YouTube's terms of service, but youtube-dl is legal and free.

Enrico Zini: Links for April 2017

1 April, 2017 - 06:00
A Survey of Propaganda
«…an excellent survey article on modern propaganda techniques, how they work, and how we might defend ourselves against them. … As to defense: "Debunking doesn't work: provide an alternative narrative."»

Junichi Uekawa: It's april.

1 April, 2017 - 05:35
It's April. I am still practicing my piano. I tried in-flight WiFi for the first time, and it seems to be usable. Of course, most of my apps are Android apps or web apps, which don't really mind high-latency connections.
