Planet Debian

Planet Debian - http://planet.debian.org/

Reproducible builds folks: Reproducible builds: week 48 in Stretch cycle

13 April, 2016 - 01:28

What happened in the reproducible builds effort between March 20th and March 26th:

Toolchain fixes
  • Sebastian Ramacher uploaded breathe/4.2.0-1 which makes its output deterministic. Original patch by Chris Lamb, merged upstream.
  • Rafael Laboissiere uploaded octave/4.0.1-1, which allows packages to be built in place, avoiding unreproducible builds due to temporary build directories appearing in the .oct files.

Daniel Kahn Gillmor worked on removing build paths from debug symbols: he submitted a patch adding -fdebug-prefix-map support to clang to match GCC, another patch against gcc-5 backporting the removal of -fdebug-prefix-map from DW_AT_producer, and finally proposed adding a normalizedebugpath feature to the reproducible feature set of dpkg-buildflags that would use -fdebug-prefix-map to replace the current directory with “.”.
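
As an illustration of what -fdebug-prefix-map does (a sketch, independent of the patches above):

$ gcc -g -fdebug-prefix-map=$(pwd)=. -o hello hello.c   # map build dir to "."
$ readelf --debug-dump=info hello | grep DW_AT_comp_dir # now records "." instead of the build path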

Sergey Poznyakoff merged the --clamp-mtime option so that it will be featured in the next Tar release. This option is likely to be used by dpkg-deb to implement deterministic mtimes for packaged files.
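
Once a tar release with the option is available, usage could look like this sketch (the cut-off date here is arbitrary):

# Any mtime newer than the value given to --mtime is clamped down to it,
# so freshly generated files no longer vary between builds.
$ tar --mtime='2016-03-25 00:00Z' --clamp-mtime -cf data.tar ./files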

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: augeas, gmtkbabel, ktikz, octave-control, octave-general, octave-image, octave-ltfat, octave-miscellaneous, octave-mpi, octave-nurbs, octave-octcdf, octave-sockets, octave-strings, openlayers, python-structlog, signond.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #818742 on milkytracker by Reiner Herrmann: sorts the list of source files.
  • #818752 on tcl8.4 by Reiner Herrmann: sort source files using C locale.
  • #818753 on tk8.6 by Reiner Herrmann: sort source files using C locale.
  • #818754 on tk8.5 by Reiner Herrmann: sort source files using C locale.
  • #818755 on tk8.4 by Reiner Herrmann: sort source files using C locale.
  • #818952 on marionnet by ceridwen: dummy out build date and uname to make build reproducible.
  • #819334 on avahi by Reiner Herrmann: ship the upstream changelog instead of the one generated by gettextize (although a duplicate of #804141 by Santiago Vila).
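
Several of the patches above pin the C locale when sorting because collation order is locale-dependent, so a build's file order can change with the builder's locale:

$ printf 'a\nB\n' | LC_ALL=en_US.UTF-8 sort
a
B
$ printf 'a\nB\n' | LC_ALL=C sort
B
a
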
tests.reproducible-builds.org

i386 build nodes have been set up by converting 2 of the 4 amd64 nodes to i386. (h01ger)

Package reviews

92 reviews have been removed, 66 added and 31 updated in the previous week.

New issues: timestamps_generated_by_xbean_spring, timestamps_generated_by_mangosdk_spiprocessor.

Chris Lamb filed 7 FTBFS bugs.

Misc.

On March 20th, Chris Lamb gave a talk at FOSSASIA 2016 in Singapore.

The very same day, but a few timezones apart, h01ger did a presentation at LibrePlanet 2016 in Cambridge, Massachusetts.

Seven GSoC/Outreachy applications were made by potential interns to work on various aspects of the reproducible builds effort. On top of interacting with several applicants, prospective mentors gathered to review the applications.

Petter Reinholdtsen: A French paperback edition of the book Free Culture by Lawrence Lessig is now available

12 April, 2016 - 15:40

I'm happy to report that the French paperback edition of my project to translate the Free Culture book by Lawrence Lessig is now available for sale on Lulu.com. Once I have formally verified my proof reading copy, which should be in the mail, the paperback edition should be available in book stores like Amazon and Barnes & Noble too.

This French edition, Culture Libre, is the work of the dblatex developer Benoît Guillon, who created the PO file from the initial translation available from the Wikilivres wiki pages, completed and corrected the translation to match the original docbook edition my project is using, and coordinated the proof reading of the final result. I believe the end result looks great, but I am biased and do not read French. In addition to the paperback edition, the book is available in PDF, EPUB and Mobi format from the github project page linked to above.

When enabling book store distribution on Lulu.com, I had to nearly triple the price to allow the book stores some profit. I also had to accept that I will get some revenue when a book is sold via Lulu.com, which, because of the non-commercial clause in the book license (CC-BY-NC), might be a problem. To bypass the problem I discussed how to handle the revenue with the author, and we agreed that the revenue for these editions goes to the Creative Commons non-profit corporation, which handles donations to the Creative Commons project. So far they have earned around USD 70 on sales of the English and Norwegian Bokmål editions, according to Lulu.com, and they will get the revenue for the French edition too. Their revenue is higher if you buy the book directly from Lulu.com instead of via a book store, so I recommend you buy directly from Lulu.com.

Perhaps you would like to get the book published in your language? The translation is done using a web based translator service, so the technical bar to enter is fairly low. Get in touch if you would like to make this happen.

Thomas Goirand: Announcing validated Debian packages for Mitaka

12 April, 2016 - 04:45

Greetings! This is a copy (four days delayed) of the announcement I made on the openstack-dev@lists.openstack.org mailing list on the 8th of April 2016.

I am overjoyed, thrilled and delighted to announce the release of the Debian packages for Mitaka.

All of the DefCore packages were validated successfully this morning through our package-only-based Tempest CI.

Content of this release
This release includes the following 23 services:
aodh 2.0.0
barbican 2.0.0
ceilometer 6.0.0
cinder 8.0.0
congress 3.0.0+dfsg1
designate 2.0.0
glance 12.0.0
gnocchi 2.0.2
heat 6.0.0
horizon 9.0.0
ironic 5.1.0
keystone 9.0.0
magnum 2.0.0
manila 2.0.0
mistral 2.0.0
murano 2.0.0
neutron 8.0.0
nova 13.0.0
trove 5.0.0
sahara 4.0.0
senlin 1.0.0
swift 2.7.0
zaqar 2.0.0

Where to find these packages
1/ Sid
All of Mitaka was uploaded to Debian Sid this week, so you can install the packages directly from Sid.

2/ Official jessie-backports
As soon as everything migrates to Debian Testing (currently aka Stretch), which will happen in 5 days if no RC bug is reported, it will be possible to upload all of Mitaka to the official Debian jessie-backports.

3/ Non-official Jessie and Trusty backports
In the meantime, the packages are available through Mirantis Jenkins automatic Debian Jessie backport repository. The full sources.list is available here:

http://mitaka-jessie.pkgs.mirantis.com/

You can use the Trusty backports as well:

http://mitaka-trusty.pkgs.mirantis.com/

To use these repositories, simply add the described sources.list to (for example) /etc/apt/sources.list.d/openstack.list, and run apt-get update. If you want to install the GPG key of the repositories, you can either install the mitaka-jessie-archive-keyring or mitaka-trusty-archive-keyring package (depending on your distribution of choice), or “apt-key add” the public key available at /debian/dists/pubkey.gpg in these repositories.
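
Putting that together, the setup might look like this sketch (the suite and component names here are placeholders; copy the real deb line published at the repository URL above):

$ echo 'deb http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports main' \
    > /etc/apt/sources.list.d/openstack.list
$ wget -qO - http://mitaka-jessie.pkgs.mirantis.com/debian/dists/pubkey.gpg | apt-key add -
$ apt-get update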

As a reminder, the URLs above contain the word “Mirantis” only because the service is sponsored by my employer. These repositories are “straight” backports from what is available in Debian Sid, without any modification.

Remember that the packages listed below are maintained separately in Debian and Ubuntu, and therefore, packages are different in these distributions:
aodh, barbican, ceilometer, cinder, designate, glance, heat, horizon, ironic, keystone, manila, neutron, nova, trove, swift.

All other packages (including all OpenStack libraries like Oslo and python-*clients) are maintained in Debian, with the contribution of Canonical, and then synced to Ubuntu, so they are the exact same packages (or at least, with a minimal difference). I hope we can further improve collaboration between Debian and Canonical during the Newton cycle.

Bug reporting
As always, bug reports are welcome and considered high-value contributions. Please follow the instructions available at https://www.debian.org/Bugs/Reporting to report bugs to the Debian BTS.

Moving forward with higher QA and the Packaging-deb project in Newton
Currently, DefCore packages are tested through a package-only (ie: no puppet, chef, you-name-it… system management involved) Tempest CI. Results can be seen at:
https://mitaka-jessie.pkgs.mirantis.com/job/openstack-tempest-ci/

However, not all packages are included in this CI yet. It is my intention, during the Newton cycle, to also include services like Designate, Trove, Barbican, Congress, … in this CI. The individual upstream teams for these services are more than welcome to approach us to make this happen more quickly.

Also, as we’re slowly starting to get the Packaging-Deb project going (ie: packaging using upstream OpenStack gerrit and gating), it is also in the pipeline to use the above-mentioned Tempest CI system as a gate for the packaging. Hopefully, this will lead us to a fully working CI/CD from trunk. We also hope to be able to use these packages to help the Puppet team test packaged OpenStack from trunk.

Greetings
On each release, I ask myself who I should thank. This time, I would like to thank everyone, because this release overall went very smoothly. The whole OpenStack community is always very helpful and understands the requirements of downstream distributions. Guys, you’re awesome, I love my work, and I love working with you all!

Cheers,

Chris Lamb: Parsing Jenkins log output to determine job status

12 April, 2016 - 00:28

I recently made the same mistake a number of times when adding new hosts to my Ansible configuration and decided to ensure it couldn't happen again. The specifics of this particular issue were that whilst I had added the hostname to the inventory file, I had neglected to add the host to the relevant group in my playbook:

oldhost
newhost

[mygroup]
oldhost
# missing newhost here

ansible-playbook would output no hosts matched but, crucially, return a successful exit code. My continuous integration system (Jenkins) would infer that the task was successful and not notify me that anything was wrong:

$ ansible-playbook deploy-mygroup.yml --limit=newhost
<snip>
PLAY [deploy] *************************************************
skipping: no hosts matched

PLAY RECAP ****************************************************

[ERROR]: No plays were matched by any host.

$ echo $?
0

This seemed to violate a few principles to me (at the very least due to the "loud" use of ERROR without a corresponding return code), so I filed a pull request against Ansible that added an optional --error-if-no-plays-matched switch:

$ ansible-playbook [..] --error-if-no-plays-matched
<snip>
[ERROR]: No plays were matched by any host.

$ echo $?
1

In the end, upstream decided to pass on it as it could be implemented via a plugin system. Desiring an immediate and potentially more general solution, I briefly looked into parsing the ansible-playbook output before moving on to parsing the Jenkins log itself.
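
For the record, such an output-parsing wrapper might have looked like this hypothetical sketch (not what I ended up deploying):

#!/bin/bash
# Fail when ansible-playbook matched no hosts, even though it exits zero.
set -o pipefail
log=$(mktemp)
ansible-playbook "$@" 2>&1 | tee "$log"
status=$?
grep -q '^skipping: no hosts matched$' "$log" && status=1
rm -f "$log"
exit $status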

This turned out to be straightforward: using the Text-Finder plugin, I configured my Jenkins job to simply error if the log contained the string skipping: no hosts matched.

I am using the Job DSL plugin so that my configuration is backed by revision control (highly recommended), so I actually used its textFinder publisher rather than the web interface:

publishers {
  textFinder(/^skipping: no hosts matched$/, '', true, false, false)
}

This results in the job "correctly" failing and alerting me:

+ ansible-playbook deploy-mygroup.yml --limit=newhost
<snip>

PLAY [deploy] *************************************************
skipping: no hosts matched

PLAY RECAP ****************************************************

[ERROR]: No plays were matched by any host.

Checking console output
/var/lib/jenkins/jobs/deploy-mygroup/builds/126/log:
skipping: no hosts matched
Build step 'Jenkins Text Finder' changed build result to FAILURE
Finished: FAILURE

Peter Eisentraut: Some git log tweaks

11 April, 2016 - 19:00

Here are some tweaks to git log that I have found useful. How applicable they are might depend on individual projects' workflows.

Git stores separate author and committer information for each commit. How these are generated and updated is sometimes mysterious but generally makes sense. For example, if you cherry-pick a commit to a different branch, the author information stays the same but the committer information is updated. git log defaults to showing the author information. But I generally care less about that than the committer information, because I’m usually interested in when the commit arrived in my or the public repository, not when it was initially thought about. So let’s try to change the default git log format to show the committer information instead. Again, depending on the project and the workflow, there can be other preferences.

To create a different default format for git log, first create a new format by setting the Git configuration item pretty.somename. I chose pretty.cmedium because it’s almost the same as the default medium but with the author information replaced by the committer information.

[pretty]
cmedium="format:%C(auto,yellow)commit %H%C(auto,reset)%nCommit:     %cn <%ce>%nCommitDate: %cd%n%n%w(0,4,4)%s%n%+b"

Unfortunately, the default git log formats are not defined in terms of these placeholders but are hardcoded in the source, so this is my best reconstruction using the available means.

You can use this as git log --pretty=cmedium, and you can set this as the default using

[format]
pretty=cmedium

If you find this useful and you’re the sort of person who is more interested in their own timeline than the author’s history, you might also like two more tweaks.

First, add %cr for relative date, so it looks like

[pretty]
cmedium="format:%C(auto,yellow)commit %H%C(auto,reset)%nCommit:     %cn <%ce>%nCommitDate: %cd (%cr)%n%n%w(0,4,4)%s%n%+b"

This adds a relative designation like “2 days ago” to the commit date.

Second, set

[log]
date=local

to have all timestamps converted to your local time.
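
For reference, the equivalent of all three settings from the command line (mind the shell quoting):

$ git config --global pretty.cmedium 'format:%C(auto,yellow)commit %H%C(auto,reset)%nCommit:     %cn <%ce>%nCommitDate: %cd (%cr)%n%n%w(0,4,4)%s%n%+b'
$ git config --global format.pretty cmedium
$ git config --global log.date local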

When you put all this together, you turn this

commit e2c117a28f767c9756d2d620929b37651dbe43d1
Author: Paul Eggert <eggert@cs.ucla.edu>
Date:   Tue Apr 5 08:16:01 2016 -0700

into this

commit e2c117a28f767c9756d2d620929b37651dbe43d1
Commit:     Paul Eggert <eggert@cs.ucla.edu>
CommitDate: Tue Apr 5 11:16:01 2016 (3 days ago)

PS: If this is lame, there is always this: http://fredkschott.com/post/2014/02/git-log-is-so-2005/

Matthew Garrett: Making it easier to deploy TPMTOTP on non-EFI systems

11 April, 2016 - 12:59
I've been working on TPMTOTP a little this weekend. I merged a pull request that adds command-line argument handling, which includes the ability to choose the set of PCRs you want to seal to without rebuilding the tools, and also lets you print the base32 encoding of the secret rather than the QR code so you can import it into a wider range of devices. More importantly, it also adds support for setting the expected PCR values on the command line rather than reading them out of the TPM, so you can now re-seal the secret against new values before rebooting.

I also wrote some new code myself. TPMTOTP is designed to be usable in the initramfs, allowing you to validate system state before typing in your passphrase. Unfortunately the initramfs itself is one of the things that's measured. So, you end up with something of a chicken-and-egg problem: TPMTOTP needs access to the secret, and the obvious thing to do is to put the secret in the initramfs. But the secret is sealed against the hash of the initramfs, so you can't generate the secret until after the initramfs has been built. Modify the initramfs to insert the secret and you change the hash, so the secret is no longer released. Boo.

On EFI systems you can handle this by sticking the secret in an EFI variable (there's some special-casing in the code to deal with the additional metadata on the front of things you read out of efivarfs). But that's not terribly useful if you're not on an EFI system. Thankfully, there's a way around this. TPMs have a small quantity of nvram built into them, so we can stick the secret there. If you pass the -n argument to sealdata, that'll happen. The unseal apps will attempt to pull the secret out of nvram before falling back to looking for a file, so things should just magically work.

I think it's pretty feature complete now, other than TPM2 support? That's on my list.


Petter Reinholdtsen: Lets make a Norwegian Bokmål edition of The Debian Administrator's Handbook

11 April, 2016 - 04:20

During this weekend's bug squashing party and developer gathering, we decided to do our part to make sure there are good books about Debian available in Norwegian Bokmål, and got in touch with the people behind the Debian Administrator's Handbook project to get started. If you want to help out, please start contributing via the hosted weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors.

The book is already available on paper in English, French and Japanese, and our goal is to get it available on paper in Norwegian Bokmål too. In addition to the paper edition, there are also EPUB and Mobi versions available. And there are incomplete translations available for many more languages.

Vincent Bernat: Testing network software with pytest and Linux namespaces

10 April, 2016 - 21:30

Started in 2008, lldpd is an implementation of IEEE 802.1AB-2005 (aka LLDP) written in C. While it contains some unit tests, like many other network-related programs of that time, their coverage is pretty poor: they are hard to write because the code is written in an imperative style and tightly coupled with the system. It would require extensive mocking1. While a rewrite (complete or iterative) would help make the code more test-friendly, it would be quite an effort and would likely introduce operational bugs along the way.

To get better test coverage, the major features of lldpd are now verified through integration tests. Those tests leverage Linux network namespaces to setup a lightweight and isolated environment for each test. They run through pytest, a powerful testing tool.

pytest in a nutshell

pytest is a Python testing tool whose primary use is to write tests for Python applications, but it is versatile enough for other creative uses. It comes bundled with three killer features:

  • you can directly use the assert keyword,
  • you can inject fixtures in any test function, and
  • you can parametrize tests.
Assertions

With unittest, the unit testing framework included with Python, and many similar frameworks, unit tests have to be encapsulated into a class and use the provided assertion methods. For example:

class testArithmetics(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 3, 4)

The equivalent with pytest is simpler and more readable:

def test_addition():
    assert 1 + 3 == 4

pytest will analyze the AST and display useful error messages in case of failure. For further information, see Benjamin Peterson’s article.

Fixtures

A fixture is the set of actions performed in order to prepare the system to run some tests. With classic frameworks, you can only define one fixture for a set of tests:

class testInVM(unittest.TestCase):

    def setUp(self):
        self.vm = VM('Test-VM')
        self.vm.start()
        self.ssh = SSHClient()
        self.ssh.connect(self.vm.public_ip)

    def tearDown(self):
        self.ssh.close()
        self.vm.destroy()

    def test_hello(self):
        stdin, stdout, stderr = self.ssh.exec_command("echo hello")
        stdin.close()
        self.assertEqual(stderr.read(), b"")
        self.assertEqual(stdout.read(), b"hello\n")

In the example above, we want to test various commands on a remote VM. The fixture launches a new VM and configures an SSH connection. However, if the SSH connection cannot be established, the fixture will fail and the tearDown() method won't be invoked. The VM will be left running.

Instead, with pytest, we could do this:

@pytest.yield_fixture
def vm():
    r = VM('Test-VM')
    r.start()
    yield r
    r.destroy()

@pytest.yield_fixture
def ssh(vm):
    ssh = SSHClient()
    ssh.connect(vm.public_ip)
    yield ssh
    ssh.close()

def test_hello(ssh):
    stdin, stdout, stderr = ssh.exec_command("echo hello")
    stdin.close()
    assert stderr.read() == b""
    assert stdout.read() == b"hello\n"

The first fixture provides a freshly booted VM. The second one sets up an SSH connection to the VM provided as an argument. Fixtures are used through dependency injection: just give their names in the signature of the test functions and fixtures that need them. Each fixture only handles the lifetime of one entity. Whether a dependent test function or fixture succeeds or fails, the VM will always be destroyed in the end.

Parameters

If you want to run the same test several times with a varying parameter, you can dynamically create test functions or use one test function with a loop. With pytest, you can parametrize test functions and fixtures:

@pytest.mark.parametrize("n1, n2, expected", [
    (1, 3, 4),
    (8, 20, 28),
    (-4, 0, -4)])
def test_addition(n1, n2, expected):
    assert n1 + n2 == expected
Testing lldpd

The general plan to test a feature in lldpd is the following:

  1. Set up two namespaces.
  2. Create a virtual link between them.
  3. Spawn a lldpd process in each namespace.
  4. Test the feature in one namespace.
  5. Check with lldpcli that we get the expected result in the other.
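
By hand, the first steps correspond roughly to this sketch (the fixtures described below automate it, and also give each daemon its own chroot and control socket, which this naive version omits):

$ ip netns add ns1 && ip netns add ns2
$ ip link add veth1 type veth peer name veth2
$ ip link set veth1 netns ns1 && ip link set veth2 netns ns2
$ ip netns exec ns1 ip link set veth1 up
$ ip netns exec ns2 ip link set veth2 up
$ ip netns exec ns1 lldpd
$ ip netns exec ns2 lldpd
$ ip netns exec ns1 lldpcli show neighbors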

Here is a typical test using the most interesting features of pytest:

@pytest.mark.skipif('LLDP-MED' not in pytest.config.lldpd.features,
                    reason="LLDP-MED not supported")
@pytest.mark.parametrize("classe, expected", [
    (1, "Generic Endpoint (Class I)"),
    (2, "Media Endpoint (Class II)"),
    (3, "Communication Device Endpoint (Class III)"),
    (4, "Network Connectivity Device")])
def test_med_devicetype(lldpd, lldpcli, namespaces, links,
                        classe, expected):
    links(namespaces(1), namespaces(2))
    with namespaces(1):
        lldpd("-r")
    with namespaces(2):
        lldpd("-M", str(classe))
    with namespaces(1):
        out = lldpcli("-f", "keyvalue", "show", "neighbors", "details")
        assert out['lldp.eth0.lldp-med.device-type'] == expected

First, the test will be executed only if lldpd was compiled with LLDP-MED support. Second, the test is parametrized. We will execute four distinct tests, one for each role that lldpd should be able to take as an LLDP-MED-enabled endpoint.

The signature of the test has four parameters that are not covered by the parametrize() decorator: lldpd, lldpcli, namespaces and links. They are fixtures. A lot of magic happens in those to keep the actual tests short:

  • lldpd is a factory to spawn an instance of lldpd. When called, it will set up the current namespace (setting up the chroot, creating the user and group for privilege separation, replacing some files to be distribution-agnostic, …), then call lldpd with the additional parameters provided. The output is recorded and added to the test report in case of failure. The module also contains the creation of the pytest.config.lldpd object that is used to record the features supported by lldpd and skip non-matching tests. You can read fixtures/programs.py for more details.

  • lldpcli is also a factory, but it spawns instances of lldpcli, the client to query lldpd. Moreover, it parses the output into a dictionary to reduce boilerplate.

  • namespaces is one of the most interesting pieces. It is a factory for Linux namespaces. It will spawn a new namespace or refer to an existing one. It is possible to switch from one namespace to another (with with) as they are contexts. Behind the scenes, the factory maintains the appropriate file descriptors for each namespace and switches to them with setns(). Once the test is done, everything is wiped out as the file descriptors are garbage collected. You can read fixtures/namespaces.py for more details. It is quite reusable in other projects2.

  • links contains helpers to handle network interfaces: creation of virtual ethernet links between namespaces, creation of bridges, bonds and VLANs, etc. It relies on the pyroute2 module. You can read fixtures/network.py for more details.

You can see an example of a test run on the Travis build for 0.9.2. Since each test is correctly isolated, it’s possible to run parallel tests with pytest -n 10 --boxed. To catch even more bugs, both the address sanitizer (ASAN) and the undefined behavior sanitizer (UBSAN) are enabled. In case of a problem, notably a memory leak, the faulty program will exit with a non-zero exit code and the associated test will fail.

  1. A project like cwrap would definitely help. However, it lacks support for Netlink and raw sockets, which are essential to lldpd's operation. 

  2. There are three main limitations in the use of namespaces with this fixture. First, when creating a user namespace, only root is mapped to the current user. With lldpd, we have two users (root and _lldpd). Therefore, the tests have to run as root. The second limitation is with the PID namespace. It's not possible for a process to switch from one PID namespace to another. When you call setns() on a PID namespace, only children of the current process will be in the new PID namespace. The PID namespace is convenient to ensure everyone gets killed once the tests are terminated, but you must keep in mind that /proc must be mounted in children only. The third limitation is that, for some namespaces (PID and user), all threads of a process must be part of the same namespace. Therefore, don't use threads in tests; use the multiprocessing module instead. 

Russ Allbery: Largish haul

10 April, 2016 - 10:42

Let's see if I can scrounge through all of my now-organized directories of ebooks and figure out what I haven't recorded here yet. At least the paper books make that relatively easy, since I don't shelve them until I post them. (Yeah, yeah, I should actually make a database.)

Hugh Aldersey-Williams — Periodic Tales (nonfiction)
Sandra Ulbrich Almazan — SF Women A-Z (nonfiction)
Radley Balko — Rise of the Warrior Cop (nonfiction)
Peter V. Brett — The Warded Man (sff)
Lois McMaster Bujold — Gentleman Jole and the Red Queen (sff)
Fred Clark — The Anti-Christ Handbook Vol. 2 (nonfiction)
Dave Duncan — West of January (sff)
Karl Fogel — Producing Open Source Software (nonfiction)
Philip Gourevitch — We Wish to Inform You That Tomorrow We Will Be Killed With Our Families (nonfiction)
Andrew Groen — Empires of EVE (nonfiction)
John Harris — @ Play (nonfiction)
David Hellman & Tevis Thompson — Second Quest (graphic novel)
M.C.A. Hogarth — Earthrise (sff)
S.L. Huang — An Examination of Collegial Dynamics... (sff)
S.L. Huang & Kurt Hunt — Up and Coming (sff anthology)
Kameron Hurley — Infidel (sff)
Kevin Jackson-Mead & J. Robinson Wheeler — IF Theory Reader (nonfiction)
Rosemary Kirstein — The Lost Steersman (sff)
Rosemary Kirstein — The Language of Power (sff)
Merritt Kopas — Videogames for Humans (nonfiction)
Alisa Krasnostein & Alexandra Pierce (ed.) — Letters to Tiptree (nonfiction)
Mathew Kumar — Exp. Negatives (nonfiction)
Ken Liu — The Grace of Kings (sff)
Susan MacGregor — The Tattooed Witch (sff)
Helen Marshall — Gifts for the One Who Comes After (sff collection)
Jack McDevitt — Coming Home (sff)
Seanan McGuire — A Red-Rose Chain (sff)
Seanan McGuire — Velveteen vs. The Multiverse (sff)
Seanan McGuire — The Winter Long (sff)
Marc Miller — Agent of the Imperium (sff)
Randall Munroe — Thing Explainer (graphic nonfiction)
Marguerite Reed — Archangel (sff)
J.K. Rowling — Harry Potter: The Complete Collection (sff)
K.J. Russell — Tides of Possibility (sff anthology)
Robert J. Sawyer — Starplex (sff)
Bruce Schneier — Secrets & Lies (nonfiction)
Mike Selinker (ed.) — The Kobold Guide to Board Game Design (nonfiction)
Douglas Smith — Chimerascope (sff collection)
Jonathan Strahan — Fearsome Journeys (sff anthology)
Nick Suttner — Shadow of the Colossus (nonfiction)
Aaron Swartz — The Boy Who Could Change the World (essays)
Caitlin Sweet — The Pattern Scars (sff)
John Szczepaniak — The Untold History of Japanese Game Developers I (nonfiction)
John Szczepaniak — The Untold History of Japanese Game Developers II (nonfiction)
Jeffrey Toobin — The Run of His Life (nonfiction)
Hayden Trenholm — Blood and Water (sff anthology)
Coen Teulings & Richard Baldwin (ed.) — Secular Stagnation (nonfiction)
Ursula Vernon — Book of the Wombat 2015 (graphic nonfiction)
Ursula Vernon — Digger (graphic novel)

Phew, that was a ton of stuff. A bunch of these were from two large StoryBundle bundles, which is a great source of cheap DRM-free ebooks, although still rather hit and miss. There's a lot of just fairly random stuff that's been accumulating for a while, even though I've not had a chance to read very much.

Vacation upcoming, which will be a nice time to catch up on reading.

Guido Günther: Debian Fun in March 2016

10 April, 2016 - 03:00
Debian LTS

March was the eleventh month I contributed to Debian LTS under the Freexian umbrella. In total I spent 13 hours (of the allocated 11.00h, plus 5.25h carried over from last month) working on preparing for wheezy-lts:

  • Uploaded aptdaemon to {old-,}stable-proposed-updates (#818006, #818007)
  • Fixed CVE-2012-6700, CVE-2012-6769 and CVE-2012-6768 in Wheezy's dhcpcd, resulting in DSA-3534
  • Reached out to Debian's Xen and KVM maintainers, Xen's community manager and several companies to assess LTS maintainability
  • Researched and proposed a possible way forward for QEMU and libvirt
  • Uploaded a backport of libvirt to wheezy-backports for that
  • Prepared a fix for Wheezy's gtk+3.0 for CVE-2013-7447 (#818090) and proposed it for oldstable-p-u (#819362)
  • Looked into Wheezy's lxc and CVE-2015-1335 specifically, marking it as no-dsa after discussion with the security team
  • Made bin/support-ended.py support EOL dates
  • Reviewed Antoine's nss work for Wheezy and worked on the corresponding update for Jessie (to be finished this month)
Other Debian things

Martín Ferrari: Come to SunCamp this May!

9 April, 2016 - 23:45

Do you fancy a hack-camp in a place like this?

As you might have heard by now, Ana (Guerrero) and I are organising a small Debian event this spring: the Debian SunCamp 2016.

It is going to be different to most other Debian events. Instead of a busy schedule of talks, SunCamp will focus on the hacking and socialising aspect.

We have tried to make this event the simplest event possible, both for organisers and attendees. There will be no schedule, except for the meal times at the hotel. But these can be ignored too; there is a lovely bar that serves snacks all day long, and plenty of restaurants and cafés around the village.

One of the things that makes the event simple is that we have negotiated a flat price for accommodation that includes use of all the facilities in the hotel, and optionally food. We will give you a booking code, and then you arrange your accommodation as you please; you can even stay longer if you feel like it!

The rooms are simple but pretty, and everything has been renovated very recently.

We are not preparing a talks programme, but we will provide the space and resources for talks if you feel inclined to prepare one.

You will have a huge meeting room, divided into 4 areas to reduce noise, where you can hack, have team discussions, or present talks.

Of course, some people will prefer to move their discussions to the outdoor area.

Or just skip the discussion, and have a good time with your Debian friends, playing pétanque, pool, air hockey, arcades, or board games.

Do you want to see more pictures? Check the full gallery

Debian SunCamp 2016

Hotel Anabel, Lloret de Mar, Province of Girona, Catalonia, Spain

May 26-29, 2016

Tempted already? Head to the wiki page and register now; it is only 7 weeks away!

Please reserve your room before the end of April. The hotel has reserved a number of rooms for us until that time. You can reserve a room after April, but we can't guarantee the hotel will still have free rooms.


Steve Kemp: Recycling old ideas ..

9 April, 2016 - 20:47

My previous blog post was about fuzzing and finding segfaults in GNU Awk. At the time of this update they still remain unfixed.

Reading about a new release of mutt I've seen a lot of complaints about how it handles HTML mail by shelling out to lynx or w3m. As I have a vested interest in console-based mail clients I wanted to have a quick check to see how dangerous that could be. After all, it wasn't so long ago that I discovered that printing the fingerprint of an SSH key could be dangerous, so the idea that parsing untrusted HTML might be risky is easy to believe.

In fact back in 2005 I reported that some specific HTML could crash Mozilla's firefox. Due to some ordering issues my Firefox bug was eventually reported as a duplicate, and although it seemed to qualify for the Mozilla bug-bounty and a CVE assignment I never received any actual cash. Shame. I'd have been more interested in testing the browser if I had a cheque to hang on my wall (and never cash).

Anyway full-circle. Fuzzing the w3m console-based browser resulted in a bunch of segfaults when running this:

 w3m -dump $file.html
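
With a file-oriented fuzzer such as afl, for example, the harness can be as simple as this sketch (assuming w3m was rebuilt with afl instrumentation and testcases/ holds a few seed HTML files; the fuzzer and paths here are illustrative, not necessarily the setup used):

$ afl-fuzz -i testcases/ -o findings/ -- ./w3m -dump @@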

Anyway, each of the two bugs I reported was fixed within a day or two, and both involved gnarly UTF-8/encoding transformations. Many thanks to Tatsuya Kinoshita for such prompt attention and excellent debugging skills.

And lynx? Still no segfaults. I'll leave the fuzzer running over the weekend and if there are no faults found by Monday I guess I'll move on to links.

Colin Watson: No more “Hash Sum Mismatch” errors

8 April, 2016 - 21:06

The Debian repository format was designed a long time ago. The oldest versions of it were produced with the help of tools such as dpkg-scanpackages and consumed by dselect access methods such as dpkg-ftp. The access methods just fetched a Packages file (perhaps compressed) and used it as an index of which packages were available; each package had an MD5 checksum to defend against transport errors, but being from a more innocent age there was no repository signing or other protection against man-in-the-middle attacks.

An important and intentional feature of the early format was that, apart from the top-level Packages file, all other files were static in the sense that, once published, their content would never change without also changing the file name. This means that repositories can be efficiently copied around using rsync without having to tell it to re-checksum all files, and it avoids network races when fetching updates: the repository you’re updating from might change in the middle of your update, but as long as the repository maintenance software keeps superseded packages around for a suitable grace period, you’ll still be able to fetch them.

The repository format evolved rather organically over time as different needs arose, by what one might call distributed consensus among the maintainers of the various client tools that consumed it. Of course all sorts of fields were added to the index files themselves, which have an extensible format so that this kind of thing is usually easy to do. At some point a Sources index for source packages was added, which worked pretty much the same way as Packages except for having a different set of fields. But by far the most significant change to the repository structure was the “package pools” project.

The original repository layout put the packages themselves under the dists/ tree along with the index files. The dists/ tree is organised by “suite” (modern examples of which would be “stable”, “stable-updates”, “testing”, “unstable”, “xenial”, “xenial-updates”, and so on). This meant that making a release of Debian tended to involve copying lots of data around, and implementing the “testing” suite would have been very costly. Package pools solved this problem by moving individual package files out of dists/ and into a new pool/ tree, allowing those files to be shared between multiple suites with only a negligible cost in disk space and mirror bandwidth. From a database design perspective this is obviously much more sensible. As part of this project, the original Debian “dinstall” repository maintenance scripts were replaced by “katie”, nowadays known as “dak”, which among other things used a new apt-ftparchive program to build the index files; this replaced dpkg-scanpackages and dpkg-scansources, and included its own database cache which made a big difference to performance at the scale of a distribution.

A few months after the initial implementation of package pools, Release files were added. These formed a sort of meta-index for each suite, telling APT which index files were available (main/binary-i386/Packages, non-free/source/Sources, and so on) and what their checksums were. Detached signatures were added alongside that (Release.gpg) so that it was now possible to fetch packages securely given a public key for the repository, and client-side verification support for this eventually made its way into Debian and Ubuntu. The repository structure stayed more or less like this for several years.

At some point along the way, those of us by now involved in repository maintenance realised that an important property had been lost. I mentioned earlier that the original format allowed race-free updates, but this was no longer true with the introduction of the Release file. A client now had to fetch Release and then fetch whichever other index files such as Packages they wanted, typically in separate HTTP transactions. If a client was unlucky, these transactions would fall on either side of a mirror update and they’d get a “Hash Sum Mismatch” error from APT. Worse, if a mirror was unlucky and also didn’t go to special lengths to verify index integrity (most don’t), its own updates could span an update of its upstream mirror and then all its clients would see mismatches until the next mirror update. This was compounded by using detached signatures, so Release and Release.gpg were fetched separately and could be out of sync.

Fixing this has been a long road (the first time I remember talking about this was in late 2007!), and we’ve had to take care to maintain client/server compatibility along the way. The first step was to add inline-signed versions of the Release file, called InRelease, so that there would no longer be a race between fetching Release and fetching its signature. APT has had this for a while, Debian’s repository supports it as of stretch, and we finally implemented it for Ubuntu six months ago. Dealing with the other index files is more complicated, though; it isn’t sensible to inline them, as clients usually only need to fetch a small fraction of all the indexes available for a given suite.

The solution we’ve ended up with, thanks to Michael Vogt’s work implementing it in APT, is called by-hash and should be familiar in concept to people who’ve used git: with the exception of the top-level InRelease file, index files for suites that support the by-hash mechanism may now be fetched using a URL based on one of their hashes listed in InRelease. This means that clients can now operate like this:

  • Fetch dists/xenial/InRelease
  • Fetch dists/xenial/main/binary-amd64/by-hash/SHA256/46316a202cdae76a73b555414741b11d08c66620b76c470a1623cedcc8a14740 (and so on)
  • Fetch individual package files
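
By hand, the same dance looks roughly like this sketch (suites supporting the mechanism advertise it with an Acquire-By-Hash: yes field in their Release data):

# Extract the SHA256 of an index from InRelease, then fetch it via by-hash.
$ M=http://archive.ubuntu.com/ubuntu
$ wget -qO InRelease "$M/dists/xenial/InRelease"
$ H=$(awk '/^SHA256:/ {s=1; next} /^[^ ]/ {s=0} s && $3 == "main/binary-amd64/Packages.xz" {print $1}' InRelease)
$ wget "$M/dists/xenial/main/binary-amd64/by-hash/SHA256/$H"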

This is now enabled by default in Ubuntu. It’s only there as of xenial (16.04), since earlier versions of Ubuntu don’t have the necessary support in APT. With this, hash mismatches on updates should be a thing of the past.

There will still be some people who won’t yet benefit from this. debmirror doesn’t support by-hash yet; apt-cacher-ng only supports it as of xenial, although there’s an easy configuration workaround. Full archive mirrors must make sure that they put new by-hash files in place before new InRelease files (I just fixed our recommended two-stage sync script to do this; ubumirror still needs some work; Debian’s ftpsync is almost correct but needs a tweak for its handling of translation files, which I’ve sent to its maintainers). Other mirrors and proxies that have specific handling of the repository format may need similar changes.

Please let me know if you see strange things happening as a result of this change. It’s useful to check the output of apt -o Debug::Acquire::http=true update to see exactly what requests are being issued.

Petter Reinholdtsen: One in two hundred Debian users using ZFS on Linux?

8 April, 2016 - 03:30

Just for fun I had a look at the popcon numbers of ZFS-related packages in Debian, and was quite surprised by what I found. I use ZFS myself at home, but did not really expect many others to do so. But I might be wrong.

According to the popcon results for spl-linux, there are 1019 Debian installations, or 0.53% of the population, with the package installed. As far as I know the only use of the spl-linux package is as a support library for ZFS on Linux, so I use it here as a proxy for measuring the number of ZFS installations on Linux in Debian. In the kFreeBSD variant of Debian the ZFS feature is already available, and there the popcon results for zfsutils show 1625 Debian installations, or 0.84% of the population. So I guess I am not alone in using ZFS on Debian.

But even though the Debian project leader Lucas Nussbaum announced in April 2015 that the legal obstacles blocking ZFS on Debian were cleared, the package is still not in Debian. The package is again in the NEW queue. Several uploads have been rejected so far because the debian/copyright file was incomplete or wrong, but there is no reason to give up. The current status can be seen on the team status page, and the source code is available on Alioth.

As I want ZFS to be included in the next version of Debian to make sure my home server can function in the future using only official Debian packages, and the current blocker is to get the debian/copyright file accepted by the FTP masters in Debian, I decided a while back to try to help out the team. This was the background for my blog post about creating, updating and checking debian/copyright semi-automatically, and I used the techniques I explored there to try to find any errors in the copyright file. It is not very easy to check every one of the around 2000 files in the source package, but I hope we got it right this time. If you want to help out, check out the git source and try to find missing entries in the debian/copyright file.

Dirk Eddelbuettel: RcppArmadillo 0.6.700.3.0

7 April, 2016 - 23:16

A new Armadillo release, 6.700.3, is out, and we uploaded RcppArmadillo 0.6.700.3.0 to CRAN and Debian. This followed the usual thorough reverse-dependency checking of the (by now) 216 packages using it.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab's.

Changes in this release are as follows:

Changes in RcppArmadillo version 0.6.700.3.0 (2016-04-05)
  • Upgraded to Armadillo 6.700.3 (Catabolic Amalgamator Deluxe)

    • added logmat() for calculating the matrix logarithm

    • added regspace() for generating vectors with regularly spaced elements

    • added logspace() for generating vectors with logarithmically spaced elements

    • added approx_equal() for determining approximate equality

    • added trapz() for numerical integration

    • expanded .save() and .load() with hdf5_binary_trans file type, to save/load data with columns transposed to rows

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog.

Arturo Borrero González: Entering the Debian NM process

7 April, 2016 - 23:00

This week I've entered the Debian NM process to move from Debian Maintainer (DM) to Debian Developer (DD).

But what have I been doing for Debian lately?

I've been DM for the last year, after a couple of years maintaining packages with sponsors.

From 2015 up to this point in 2016, I've done roughly 33 package uploads, opened 67 bugs and contributed to many others. I now maintain and co-maintain 9 packages, most of them Netfilter-related.

This is a graph of bugs assigned to my packages over the last year:


Anibal Monsalve supported me in starting the process, and Vincent Cheng immediately became my advocate.

The duration of the NM process can vary depending on a number of factors, from a couple of months to a couple of years.

BTW, I got my opened bug statistics with this small script: deb_bugs_years.sh
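
If you just want a rough count without a script, the bts tool from devscripts can query the BTS directly (the address is a placeholder, and this is not necessarily what the script above does):

$ bts select submitter:you@example.org | wc -l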

Charles Plessy: FreeDesktop entries enter in the Debian Policy.

7 April, 2016 - 20:03

After a long journey that lasted almost three years, the use of FreeDesktop entries is now documented in our Policy.

And as a bonus, this new version 3.9.8 of the Policy also reminds us that new media types should be registered with the IANA.

Thanks to everybody who made this possible.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.