Planet Debian

Planet Debian - http://planet.debian.org/

Jonathan McDowell: C, floating point, and help!

31 August, 2017 - 23:58

Floating point is a pain. I know this. But I recently took over the sigrok packages in Debian and, as part of updating to the latest libsigrok4 library, enabled the post-compilation tests, which promptly failed on i386. Some narrowing down of the problem led to the following test case (which fails with both gcc-6 under Debian/Stretch and gcc-7 on Debian/Testing):

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[])
{
        printf("%" PRIu64 "\n", (uint64_t)((1.034567) * (uint64_t)(1000000ULL)));
}

We expect to see 1034567 printed out. On x86_64 we do:

$ arch
x86_64
$ gcc -Wall t.c -o t ; ./t
1034567

If we compile for 32-bit the result is also as expected:

$ gcc -Wall -m32 t.c -o t ; ./t
1034567

Where things get interesting is when we enable --std=c99:

$ gcc -Wall --std=c99 t.c -o t ; ./t
1034567
$ gcc -Wall -m32 --std=c99 t.c -o t ; ./t
1034566

What? It turns out all the cXX standards result in the last digit incorrectly being 6, while the gnuXX standards (gnu11 is apparently the default) result in the correct trailing 7. Is there some postfix I can add to the value to prevent the floating-point truncation from taking place? Or do I just have to accept this? It works fine on armel, so it’s not a simple 32/64 bit issue.
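One hedged workaround sketch (not necessarily what the sigrok test suite should adopt): the cXX modes imply -fexcess-precision=standard on i386, which likely changes how the x87's extended-precision intermediate gets rounded, so rounding the product to nearest with llround(3) instead of truncating toward zero makes the last digit stable:

#include <inttypes.h>
#include <math.h>
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[])
{
        /* llround() rounds to nearest rather than truncating toward zero,
         * so an intermediate that lands a hair below 1034567.0 still
         * prints as 1034567. Link with -lm. */
        printf("%" PRIu64 "\n", (uint64_t)llround(1.034567 * 1000000.0));
}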

Gunnar Wolf: gwolf.blog.fork()

31 August, 2017 - 23:21

Ohai,

I have recently started to serve as a Feature Editor for the ACM XRDS magazine. As such, I was also invited to write some general posts for the XRDS blog — and I just started yesterday by posting about DebConf.

I'm not going to pull (or mention) each of my posts in my main blog, nor will I syndicate it to Planet Debian (where most of my readership comes from), although I did add it to my dlvr.it account (that relays my posts to Twitter and Facebook, for those of you that care about said services). This mention is a one-off thing.

So, if you want to see yet another post explaining what DebConf and Debian are to the wider public, well... That's what I came up with :)

Martin-Éric Racine: Firefox slow as HEL even after boosting RAM from 4GB to 16GB

31 August, 2017 - 17:33
The title says it all: Firefox is the only piece of software that shows zero sign of improvement in its performance after quadrupling the amount of RAM installed on my 64-bit desktop computer. All other applications show a significant boost in performance. Yet, among all the applications I use, Firefox is the one that most needed this speed boost. To say that this is a major disappointment is an understatement. FFS, Mozilla devs!

Kees Cook: GRUB and LUKS

31 August, 2017 - 00:27

I got myself stuck yesterday with GRUB running from an ext4 /boot/grub, but with /boot inside my LUKS LVM root partition, which meant GRUB couldn’t load the initramfs and kernel.

Luckily, it turns out that GRUB does know how to mount LUKS volumes (and LVM volumes), but all the instructions I could find talk about setting this up ahead of time (“Add GRUB_ENABLE_CRYPTODISK=y to /etc/default/grub”), rather than what the correct manual GRUB commands are to get things running on a failed boot.

These are my notes on that, in case I ever need to do this again, since there was one specific gotcha with using GRUB’s cryptomount command (noted below).

Available devices were the raw disk (hd0), the /boot/grub partition (hd0,msdos1), and the LUKS volume (hd0,msdos5):

grub> ls
(hd0) (hd0,msdos1) (hd0,msdos5)

Then use cryptomount to open the LUKS volume (but without parentheses around the device name! GRUB acts as though it works if you use parens, but then you can’t use the resulting (crypto0)):

grub> insmod luks
grub> cryptomount hd0,msdos5
Enter password...
Slot 0 opened.

Then you can load LVM and it’ll see inside the LUKS volume:

grub> insmod lvm
grub> ls
(crypto0) (hd0) (hd0,msdos1) (hd0,msdos5) (lvm/rootvg-rootlv)

And then I could boot normally:

grub> configfile $prefix/grub.cfg

After booting, I added GRUB_ENABLE_CRYPTODISK=y to /etc/default/grub and ran update-grub. I could boot normally after that, though I’d be prompted twice for the LUKS passphrase (once by GRUB, then again by the initramfs).

To avoid this, it’s possible to add a second LUKS passphrase, contained in a file in the initramfs, as described here; this works for Ubuntu and Debian too. The quick summary is:

Create the keyfile and add it to LUKS:

# dd bs=512 count=4 if=/dev/urandom of=/crypto_keyfile.bin
# chmod 0400 /crypto_keyfile.bin
# cryptsetup luksAddKey /dev/sda5 /crypto_keyfile.bin
*enter original password*

Adjust the /etc/crypttab to include passing the file via /bin/cat:

sda5_crypt UUID=4aa5da72-8da6-11e7-8ac9-001cc008534d /crypto_keyfile.bin luks,keyscript=/bin/cat

Add an initramfs hook to copy the key file into the initramfs, keep non-root users from being able to read your initramfs, and trigger a rebuild (note the quoted heredoc delimiter, which keeps ${DESTDIR} from being expanded by the shell writing the hook):

# cat > /etc/initramfs-tools/hooks/crypto_keyfile <<"EOF"
#!/bin/bash
cp /crypto_keyfile.bin "${DESTDIR}"
EOF
# chmod a+x /etc/initramfs-tools/hooks/crypto_keyfile
# chmod 0700 /boot
# update-initramfs -u

This has the downside of leaving a LUKS passphrase “in the clear” while you’re booted, but if someone has root, they can just get your dm-crypt encryption key directly anyway:

# dmsetup table --showkeys sda5_crypt
0 155797496 crypt aes-cbc-essiv:sha256 e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 8:5 2056

And of course if you’re worried about Evil Maid attacks, you’ll need a real static root of trust instead of doing full disk encryption passphrase prompting from an unverified /boot partition. :)

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Daniel Silverstone: STM32 USB and Rust - Packet Memory Area

30 August, 2017 - 23:15

In this, our next exciting installment of STM32 and Rust for USB device drivers, we're going to look at what the STM32 calls the 'packet memory area'. If you've been reading along with the course, including reading up on the datasheet content, then you'll be aware that as well as the STM32's normal SRAM, there's a 512 byte SRAM dedicated to the USB peripheral. This SRAM is called the 'packet memory area' and is shared between the main bus and the USB peripheral core. Its purpose is, simply, to store packets in transit: both those IN to the host (stored, queued for transmission) and those OUT from the host (stored, queued for the application to extract and consume).

It's time to actually put hand to keyboard on some Rust code, and the PMA is the perfect starting point, since it involves two basic structures. Packets are the obvious first structure, and they are contiguous sets of bytes which for the purpose of our work we shall assume are one to sixty-four bytes long. The second is what the STM32 datasheet refers to as the BTABLE or Buffer Descriptor Table. Let's consider the BTABLE first.

The Buffer Descriptor Table

The BTABLE is arranged in quads of 16bit words. For "normal" endpoints this is a pair of descriptors, each consisting of two words, one for transmission and one for reception. The STM32 also has a concept of double buffered endpoints, but we're not going to consider those in our proof-of-concept work. The STM32 allows for up to eight endpoints (EP0 through EP7) in internal register naming, though they support endpoints numbered from zero to fifteen in the sense of the endpoint address numbering. As such there are eight descriptors, each four 16bit words long (eight bytes), making for a buffer descriptor table which is 64 bytes in size at most.

Buffer Descriptor Table

Byte offset in PMA   Field name      Description
(EPn * 8) + 0        USB_ADDRn_TX    The address (inside the PMA) of the TX buffer for EPn
(EPn * 8) + 2        USB_COUNTn_TX   The number of bytes present in the TX buffer for EPn
(EPn * 8) + 4        USB_ADDRn_RX    The address (inside the PMA) of the RX buffer for EPn
(EPn * 8) + 6        USB_COUNTn_RX   The number of bytes of space available for the RX buffer for EPn (and once received, the number of bytes received)

The TX entries are trivial to comprehend. To transmit a packet, part of the process involves writing the packet into the PMA, putting the address into the appropriate USB_ADDRn_TX entry, and the length into the corresponding USB_COUNTn_TX entry, before marking the endpoint as ready to transmit.

To receive a packet though is slightly more complex. The application must allocate some space in the PMA, setting the address into the USB_ADDRn_RX entry of the BTABLE before filling out the top half of the USB_COUNTn_RX entry. For ease of bit sizing, the STM32 only supports space allocations of two to sixty-two bytes in steps of two bytes at a time, or thirty-two to five-hundred-twelve bytes in steps of thirty-two bytes at a time. Once the packet is received, the USB peripheral will fill out the lower bits of the USB_COUNTn_RX entry with the actual number of bytes filled out in the buffer.

Packets themselves

Since packets are, typically, a maximum of 64 bytes long (for USB 2.0) and are simply sequences of bytes with no useful structure to them (as far as the USB peripheral itself is concerned) the PMA simply requires that they be present and contiguous in PMA memory space. Addresses of packets are relative to the base of the PMA and are byte-addressed, however they cannot start on an odd byte, so essentially they are 16bit addressed. Since the BTABLE can be anywhere within the PMA, as can the packets, the application will have to do some memory management (either statically, or dynamically) to manage the packets in the PMA.

Accessing the PMA

The PMA is accessed in 16bit word sections. It's not possible to access single bytes of the PMA, nor is it conveniently structured as far as the CPU is concerned. Instead the PMA's 16bit words are spread on 32bit word boundaries as far as the CPU knows. This is done for convenience and simplicity of hardware, but it means that we need to ensure our library code knows how to deal with this.

First up, to convert an address in the PMA into something which the CPU can use we need to know where in the CPU's address space the PMA is. Fortunately this is fixed at 0x4000_6000. Secondly we need to know what address in the PMA we wish to access, so we can determine which 16bit word that is, and thus what the address is as far as the CPU is concerned. If we assume we only ever want to access 16bit entries, we can just multiply the PMA offset by two before adding it to the PMA base address. So, to access the 16bit word at byte-offset 8 in the PMA, we'd look for the 16bit word at 0x4000_6000 + (0x08 * 2) => 0x4000_6010.
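As a minimal standalone sketch of that conversion (just the arithmetic; the code below bakes it into the array layout instead):

    /// Map a byte offset within the PMA to the address the CPU must use,
    /// given that the PMA's 16bit words are spread on 32bit boundaries.
    fn pma_cpu_address(pma_offset: usize) -> usize {
        const PMA_BASE: usize = 0x4000_6000;
        PMA_BASE + (pma_offset * 2)
    }

    // pma_cpu_address(0x08) == 0x4000_6010, matching the worked example above.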

Bundling the PMA into something we can use

I said we'd do some Rust, and so we shall…

    // Thanks to the work by Jorge Aparicio, we have a convenient wrapper
    // for peripherals which means we can declare a PMA peripheral:
    pub const PMA: Peripheral<PMA> = unsafe { Peripheral::new(0x4000_6000) };

    // The PMA struct type which the peripheral will return a ref to
    pub struct PMA {
        pma_area: PMA_Area,
    }

    // And the way we turn that ref into something we can put a useful impl on
    impl Deref for PMA {
        type Target = PMA_Area;
        fn deref(&self) -> &PMA_Area {
            &self.pma_area
        }
    }

    // This is the actual representation of the peripheral, we use the C repr
    // in order to ensure it ends up packed nicely together
    #[repr(C)]
    pub struct PMA_Area {
        // The PMA consists of 256 u16 words separated by u16 gaps, so lets
        // represent that as 512 u16 words which we'll only use every other of.
        words: [VolatileCell<u16>; 512],
    }

That block of code gives us three important things. Firstly a peripheral object which we will be able to (later) manage nicely as part of the set of peripherals which RTFM will look after for us. Secondly we get a convenient packed array of u16s which will be considered volatile (the compiler won't optimise around the ordering of writes etc). Finally we get a struct on which we can hang an implementation to give our PMA more complex functionality.

A useful first pair of functions would be to simply let us get and put u16s in and out of that word array, since we're only using every other word…

    impl PMA_Area {
        pub fn get_u16(&self, offset: usize) -> u16 {
            assert!((offset & 0x01) == 0);
            self.words[offset].get()
        }

        pub fn set_u16(&self, offset: usize, val: u16) {
            assert!((offset & 0x01) == 0);
            self.words[offset].set(val);
        }
    }

These two functions take an offset in the PMA and get or set the u16 word at that offset. They only work on u16 boundaries and as such they assert that the bottom bit of the offset is unset. (Written as debug_assert! that check would go away in a release build; during debugging it might be essential.) Since we're only using 16bit boundaries, this means that the first word in the PMA will be at offset zero, and the second at offset two, then four, then six, etc. Since we allocated our words array to expect to use every other entry, this automatically converts into the addresses we desire.

If we pop (and please don't worry about the unsafe{} stuff for now):

    unsafe { (&*usb::pma::PMA.get()).set_u16(4, 64); }

into our main function somewhere, and then build and objdump our test binary we can see the following set of instructions added:

 80001e4:   f246 0008   movw    r0, #24584  ; 0x6008
 80001e8:   2140        movs    r1, #64 ; 0x40
 80001ea:   f2c4 0000   movt    r0, #16384  ; 0x4000
 80001ee:   8001        strh    r1, [r0, #0]

This boils down to a u16 write of 0x0040 (64) to the address 0x4000_6008, which is the third 32 bit word in the CPU's view of the PMA memory space (where offset 4 is the third 16bit word), which is exactly what we'd expect to see.

We can, from here, build up some functions for manipulating a BTABLE, though the most useful ones for us to take a look at are the RX counter functions:

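    // BTABLE is assumed to be a const usize defined elsewhere in this module,
    // giving the BTABLE's base byte offset within the PMA (zero in our setup,
    // with the BTABLE at the bottom of the PMA).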
    pub fn get_rxcount(&self, ep: usize) -> u16 {
        self.get_u16(BTABLE + (ep * 8) + 6) & 0x3ff
    }

    pub fn set_rxcount(&self, ep: usize, val: u16) {
        assert!(val <= 1024);
        let rval: u16 = {
            if val > 62 {
                assert!((val & 0x1f) == 0);
                (((val >> 5) - 1) << 10) | 0x8000
            } else {
                assert!((val & 1) == 0);
                (val >> 1) << 10
            }
        };
        self.set_u16(BTABLE + (ep * 8) + 6, rval)
    }

The getter is fairly clean and clear, we need the BTABLE base in the PMA, add the address of the USB_COUNTn_RX entry to that, retrieve the u16 and then mask off the bottom ten bits since that's the size of the relevant field.

The setter is a little more complex, since it has to deal with the two possible cases. This isn't pretty, and we might be able to write some better peripheral structs in the future, but for now: if the length we're setting is 62 or less and divisible by two, we put a zero in the top bit and the number of 2-byte lumps in bits 14:10; if it's 64 or more, we check that it's divisible by 32, put the count (minus one) of those 32-byte blocks in instead, and set the top bit to mark it as such.

Fortunately, when we set constants, Rust's compiler manages to optimise all this very quickly. For a BTABLE at the bottom of the PMA, and an initialisation statement of:

    unsafe { (&*usb::pma::PMA.get()).set_rxcount(1, 64); }

then we end up with the simple instruction sequence:

80001e4:    f246 001c   movw    r0, #24604  ; 0x601c
80001e8:    f44f 4104   mov.w   r1, #33792  ; 0x8400
80001ec:    f2c4 0000   movt    r0, #16384  ; 0x4000
80001f0:    8001        strh    r1, [r0, #0]

We can decompose that into a C-like *((u16*)0x4000601c) = 0x8400 and from there we can see that it's writing to the u16 at 0x1c bytes into the CPU's view of the PMA, which is 14 bytes into the PMA itself. Since we know we set the BTABLE at the start of the PMA, it's 14 bytes into the BTABLE, which is firmly in the EP1 entries: specifically USB_COUNT1_RX, which is what we were hoping for. To confirm this, check out page 651 of the datasheet. The value set was 0x8400, which we can decompose into 0x8000 and 0x0400. The first is the top bit, telling us that BL_SIZE is one and thus the blocks are 32 bytes long. Shifting the remaining 0x0400 right ten places gives the value 1 for the field NUM_BLOCK, and since with BL_SIZE set the allocation is (NUM_BLOCK + 1) blocks, that's two 32-byte blocks: the 64 bytes we asked it to set as the size of the RX buffer. It has done exactly what we hoped it would, but the compiler managed to optimise it into a single 16 bit store of a constant value to a constant location. Nice and efficient.

Finally, let's look at what happens if we want to write a packet into the PMA. For now, let's assume packets come as slices of u16s because that'll make our life a little simpler:

    pub fn write_buffer(&self, base: usize, buf: &[u16]) {
        for (ofs, v) in buf.iter().enumerate() {
            self.set_u16(base + (ofs * 2), *v);
        }
    }

Yes, even though we're deep in no_std territory, we can still get an iterator over the slice, and enumerate it, getting a nice iterator of (index, value) though in this case, the value is a ref to the content of the slice, so we end up with *v to deref it. I am sure I could get that automatically happening but for now it's there.

Amazingly, despite using iterators, enumerators, high level for loops, function calls, etc, if we pop:

    unsafe { (&*usb::pma::PMA.get()).write_buffer(0, &[0x1000, 0x2000, 0x3000]); }

into our main function and compile it, we end up with the instruction sequence:

80001e4:    f246 0000   movw    r0, #24576  ; 0x6000
80001e8:    f44f 5180   mov.w   r1, #4096   ; 0x1000
80001ec:    f2c4 0000   movt    r0, #16384  ; 0x4000
80001f0:    8001        strh    r1, [r0, #0]
80001f2:    f44f 5100   mov.w   r1, #8192   ; 0x2000
80001f6:    8081        strh    r1, [r0, #4]
80001f8:    f44f 5140   mov.w   r1, #12288  ; 0x3000
80001fc:    8101        strh    r1, [r0, #8]

which, as you can see, ends up being three sequential halfword stores directly to the right locations in the CPU's view of the PMA. You have to love seriously aggressive compile-time optimisation.

Hopefully, by next time, we'll have layered some more pleasant routines on our PMA code, and begun a foray into the setup necessary before we can begin handling interrupts and start turning up on a USB port.

Foteini Tsiami: Internationalization, part five: documentation and release!

30 August, 2017 - 14:27

LTSP Manager has just been announced in the ltsp-discuss mailing list! After long hours of testing and fixing issues that were related to the internationalization process, it’s now time to make it available to a broader audience. It’s currently available for Ubuntu and Debian via a PPA, and it will shortly get uploaded to Debian experimental.

We’ve documented all the necessary steps to install and maintain LTSP using LTSP Manager in the LTSP wiki: http://wiki.ltsp.org/wiki/Ltsp-manager. The initial documentation is deliberately concise so that new users can read all of it. We’ve also included best practices about user account management in schools etc.

This concludes all the tasks outlined by my Outreachy project, but of course not my involvement with LTSP Manager. I'll keep using it in my schools and remain an active part of its ecosystem. Many thanks to Debian Outreachy and to my mentors for the opportunity to work on this excellent project!


Wouter Verhelst: Re-encoding DebConf17 videos

30 August, 2017 - 12:52

Feedback we received after DebConf17 was that the quality of the released videos was rather low. Since I'd been responsible for encoding the videos, I investigated that, and found that I had used the video bitrate defaults of ffmpeg, which are rather low. So, I read some more documentation, and came up with better settings for the encoding to be done properly.

While at it, I found that the reason that VP9 encoding takes forever, as I'd found before, has everything to do with the same low-bitrate defaults, and is not an inherent problem to VP9 encoding. In light of that, I ran a test encode of a video with VP8 as well as VP9 encoding, and found that it shaves off some of the file size for about the same quality, with little to no increase in encoder time.

Unfortunately, the initial VP9 versions resulted in files that would play in some media players and some browsers, but not in all of them. A quick email to the WebM-discuss mailing list got me a reply explaining the cause: our cameras produce YUV422, which VP9 supports but some media players do not, while VP8 doesn't support that format at all; so the result was VP9 files with YUV422 encoding and VP8 ones with YUV420. The fix was to add a single extra command-line parameter to force the pixel format to YUV420 and tell ffmpeg to downsample the video to that.
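For illustration only (a sketch; the exact invocation used for the DebConf videos isn't reproduced here, and the file names and bitrate are invented), the switch in question is -pix_fmt:

$ ffmpeg -i input.mkv -c:v libvpx-vp9 -b:v 2M -pix_fmt yuv420p output.webm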

After adding another few parameters so as to ensure that some metadata would be added to the files as well, I've now told the system to re-encode the whole corpus of DebConf17 videos in both VP8 and VP9. It would appear that the VP9 files are, on average, about 1/4 to 1/3 smaller than the equivalent VP8 files, with no apparent loss in quality.

Since some people might like them for whatever reason, I've not dropped the old lower-quality VP8 files, but instead moved them to an "lq" directory. They are usually about half the size of the VP9 files, but the quality is pretty terrible.

TLDR: if you downloaded videos from the debconf video archive, you may want to check whether they have been re-encoded yet (which would be the case if the state on that page is "done"), and if so, download them anew.

Dirk Eddelbuettel: RcppArmadillo 0.7.960.1.2

30 August, 2017 - 09:20

A second fix-up release is needed following the recent bi-monthly RcppArmadillo release and the initial follow-up, as it turns out that OS X / macOS is so darn special that it needs an entirely separate treatment for OpenMP. Namely, to turn it off entirely...

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 384 other packages on CRAN, an increase of 54 since the CRAN release in June!

Changes in RcppArmadillo version 0.7.960.1.2 (2017-08-29)
  • On macOS, OpenMP support is now turned off (#170).

  • The package is now compiling under the C++11 standard (#170).

  • The vignette dependency is correctly set (James and Dirk in #168 and #169).

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Colin Watson: env --chdir

30 August, 2017 - 06:54

I was recently asked to sort things out so that snap builds on Launchpad could themselves install snaps as build-dependencies. To make this work we need to start doing builds in LXD containers rather than in chroots. As a result I’ve been doing some quite extensive refactoring of launchpad-buildd: it previously had the assumption that it was going to use a chroot for everything baked into lots of untested helper shell scripts, and I’ve been rewriting those in Python with unit tests and with a single Backend abstraction that isolates the high-level logic from the details of where each build is being performed.

This is all interesting work in its own right, but it’s not what I want to talk about here. While I was doing all this refactoring, I ran across a couple of methods I wrote a while back which looked something like this:

def chroot(self, args, echo=False):
    """Run a command in the chroot.

    :param args: the command and arguments to run.
    """
    args = set_personality(
        args, self.options.arch, series=self.options.series)
    if echo:
        print("Running in chroot: %s" %
              ' '.join("'%s'" % arg for arg in args))
        sys.stdout.flush()
    subprocess.check_call([
        "/usr/bin/sudo", "/usr/sbin/chroot", self.chroot_path] + args)

def run_build_command(self, args, env=None, echo=False):
    """Run a build command in the chroot.

    This is unpleasant because we need to run it in /build under sudo
    chroot, and there's no way to do this without either a helper
    program in the chroot or unpleasant quoting.  We go for the
    unpleasant quoting.

    :param args: the command and arguments to run.
    :param env: dictionary of additional environment variables to set.
    """
    args = [shell_escape(arg) for arg in args]
    if env:
        args = ["env"] + [
            "%s=%s" % (key, shell_escape(value))
            for key, value in env.items()] + args
    command = "cd /build && %s" % " ".join(args)
    self.chroot(["/bin/sh", "-c", command], echo=echo)

(I’ve already replaced the chroot method with a call to Backend.run, but it’s easier to see what I’m talking about in the original form.)

One thing to notice about this code is that it uses several adverbial commands: that is, commands that run another command in a different way. For example, sudo runs another command as another user, while chroot runs another command with a different root directory, and env runs another command with different environment variables set. These commands chain neatly, and they also have the useful property that they take the subsidiary command and its arguments as a list of arguments. coreutils has several other commands that behave this way, and adverbio is another useful example.

By contrast, su -c is something you might call a “quasi-adverbial” command: it does modify the behaviour of another command, but it takes it as a single argument which it then passes to sh -c. Every time you have something that’s passed to a shell like this, you need a corresponding layer of shell quoting to escape any shell metacharacters that should be interpreted literally. This is often cumbersome and is easy to get wrong. My Python implementation is as follows, and I wouldn’t be totally surprised to discover that it contained a bug:

import re

non_meta_re = re.compile(r'^[a-zA-Z0-9+,./:=@_-]+$')

def shell_escape(arg):
    if non_meta_re.match(arg):
        return arg
    else:
        return "'%s'" % arg.replace("'", "'\\''")

Python >= 3.3 has shlex.quote, which is an improvement and we should probably use that instead, but it’s still another thing to forget to call. This is why process-spawning libraries such as Python’s subprocess, Perl’s system and open, and my own libpipeline for C encourage programmers to use a list syntax and to avoid involving the shell entirely wherever possible.
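By way of comparison, a minimal sketch of the same helper reduced to the standard library call (assuming Python >= 3.3):

import shlex

def shell_escape(arg):
    # shlex.quote applies the same single-quote wrapping strategy, with the
    # corner cases maintained upstream rather than in our code.
    return shlex.quote(arg)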

One thing that the standard Unix tools don’t let you do in an adverbial way is to change your working directory, and I’ve run into this annoying limitation several times. This means that it’s difficult to chain that operation together with other adverbs, for example to run a command in a particular working directory inside a chroot. The workaround I used above was to invoke a shell that runs cd /build && ..., but that’s another command that’s only quasi-adverbial, since the extra shell means an extra layer of shell quoting.

(Ian Jackson rightly observes that you can in fact write the necessary adverb as something like sh -ec 'cd "$1"; shift; exec "$@"' chdir. I think that’s a bit uglier than I ideally want to use in production code, but you might reasonably think that it’s worth it to avoid the extra layer of shell quoting.)

I therefore decided that this was a feature that belonged in coreutils, and after a bit of mailing list discussion we felt it was best implemented as a new option to env(1). I sent a patch for this which has been accepted. This means that we have a new composable adverb, env --chdir=NEWDIR, which will allow the run_build_command method above to be rewritten as something like this:

def run_build_command(self, args, env=None, echo=False):
    """Run a build command in the chroot.

    :param args: the command and arguments to run.
    :param env: dictionary of additional environment variables to set.
    """
    env_args = ["env", "--chdir=/build"]
    if env:
        for key, value in env.items():
            env_args.append("%s=%s" % (key, value))
    self.chroot(env_args + args, echo=echo)
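As a standalone illustration of the chaining (the chroot path and variable are invented for the example), the adverbs now compose with no shell, and thus no extra quoting layer, in between:

$ sudo chroot /srv/build-chroot env --chdir=/build MAKEFLAGS=-j4 make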

The env --chdir option will be in coreutils 8.28. We won’t be able to use it in launchpad-buildd until that’s available in all Ubuntu series we might want to build for, so in this particular application that’s going to take a few years; but other applications may well be able to make use of it sooner.

Holger Levsen: 20170829-qubes-os

30 August, 2017 - 05:46
"Qubes OS from the POV of a Debian developer" and "Qubes OS user meetup at Bornhack"

I wrote the following while on my way home from Bornhack, an awesome hacking camp on the Danish island of Bornholm where about 200 people gathered for a week, with a nice beach within walking distance (and a not-too-cold Baltic Sea) and vegan and not-so-vegan grills, to give some hints as to why it was awesome. (Actually it was mostly awesome due to the people there, not the things, but anyway…)

And there were of course also talks and workshops, and one was a Qubes OS user meetup, which amazingly was attended by 20 Qubes OS users (and some Qubes OS users at the camp were even missing from it). In other words: Qubes OS had a >10% user base at this event! And I even found one heads user!

At DebConf17 I gave a talk titled "Qubes OS from the POV of a Debian developer"; the video was immediately available (thanks to the DebConf videoteam!), and I've now put my slides for it online as well.

Since then, I learned a few things:

  • I should have mentioned Standalone-VM (=non-template based VMs) in the talk, as those are also possible, and sometimes handy.
  • IPv6 works with manual fiddling… (but this is undocumented)
  • I've given Salt (for configuration management) a short try (with the help of an experienced Salt user, thanks nodens!) and I must say I'm not impressed yet. But Qubes 4.0 will bring huge changes to this area (including introducing an AdminVM), so I won't use Salt while I'm still on Qubes 3.2, but I will come back and look at this later.
  • After adding a single line containing "iwldvm iwlwifi" to /rw/config/suspend-module-blacklist in the NetVM, the wireless comes back nicely (on my X230) after suspend/resume.

I'm looking forward to more Qubes OS user meetups in future!

Carl Chenet: Send scheduled messages to both Twitter and Mastodon with the Remindr bot

30 August, 2017 - 05:00

Do you need to send messages to both Twitter and Mastodon? Use the Remindr bot! Remindr is written in Python, released under the GPLv3 license.

1. How to install Remindr

Install Remindr from PyPI:

# pip3 install remindr

2. How to configure Remindr

First, start by writing a messages.txt file with the following content:

o en Send scheduled messages to both Twitter and Mastodon with the Remindr bot https://carlchenet.com/send-scheduled-messages-to-both-twitter-and-mastodon-with-the-remindr-bot #remindr #twitter #mastodon
x en Follow Carl Chenet for great news about Free Software! https://carlchenet.com #freesoftware

The first field indicates whether the line is the next one to be considered by Remindr: ‘o’ marks the next line to be sent, while ‘x’ means the line won’t be sent next. The second field is the two-letter code for the language of your content, for example en or fr. The rest of the line forms the body of your messages to Mastodon and Twitter.

You need to configure the Mastodon and the Twitter credentials in order to allow Remindr to send the messages. First you need to generate the credentials. For Twitter, you need to manually create an app on apps.twitter.com. For Mastodon, just launch the following command:

$ register_remindr_app

Some information will be asked by the command. At the end, two files are created, remindr_usercred.txt and remindr_clientcred.txt. You’re going to need them for the Remindr configuration below.

For the Remindr configuration, here is a complete example, stored for instance as /etc/remindr/remindr.ini (the path used in the commands below):

[mastodon]
instance_url=https://mastodon.social
user_credentials=remindr_usercred.txt
client_credentials=remindr_clientcred.txt

[twitter]
consumer_key=a6lv2gZxkvk6UbQ30N4vFmlwP
consumer_secret=j4VxM2slv0Ud4rbgZeGbBzPG1zoauBGLiUMOX0MGF6nsjcyn4a
access_token=1234567897-Npq5fYybhacYxnTqb42Kbb3A0bKgmB3wm2hGczB
access_token_secret=HU1sjUif010DkcQ3SmUAdObAST14dZvZpuuWxGAV0xFnC

[image]
path_to_image=/home/chaica/blog-carl-chenet.png

[entrylist]
path_to_list=/etc/remindr/messages.txt

Your configuration is complete! Now we have to check if everything is fine.

Read the full documentation on Readthedocs.

3. How to use Remindr

Now let’s try your configuration by launching Remindr for the first time by hand:

$ remindr -c /etc/remindr/remindr.ini

The messages should appear on both Twitter and Mastodon.

4. How to schedule the Remindr execution

The easiest way is to use your user crontab. Just add the following line to your crontab file, editing it with crontab -e:

00 10 * * * remindr -c /etc/remindr/remindr.ini

From now on, your message will be sent every day at 10:00 AM.

Going further with Remindr … and finally

You can help me develop tools for Mastodon and other social networks by donating anything through Liberapay (also possible with cryptocurrencies). Any contribution will be appreciated; that’s a big motivation factor.
Donate

You may also follow my account @carlchenet on Mastodon

Jeremy Bicha: GNOME Tweaks 3.25.91

30 August, 2017 - 04:52

The GNOME 3.26 release cycle is in its final bugfix stage before release.

Here’s a look at what’s new in GNOME Tweaks since my last post.

I’ve heard people say that GNOME likes to remove stuff. If that were true, how would there be anything left in GNOME? But maybe it’s partially true. And maybe it’s possible for removals to be a good thing?

Removal #1: Power Button Settings

The Power page in Tweaks 3.25.91 looks a bit empty. In previous releases, the Tweaks app had a “When the Power button is pressed” setting that nearly duplicated the similar setting in the Settings app (gnome-control-center). I worked to restore support for “Power Off” as one of its options. Since this is now in Settings 3.25.91, there’s no need for it to be in Tweaks any more.

Removal #2: Hi-DPI Settings

GNOME Tweaks offered a basic control to scale windows 2x for Hi-DPI displays. More advanced support is now in the Settings app. I suspect that fractional scaling won’t be supported in GNOME 3.26 but it’s something to look forward to in GNOME 3.28!

Removal #3 Global Dark Theme

I am announcing today that one of the oldest and most popular tweaks will be removed from Tweaks 3.28 (to be released next March). Global Dark Theme is being removed because:

  • Changing the Global Dark Theme option required closing any currently running apps and reopening them to get the correct theme.
  • It didn’t work for sandboxed apps (Flatpak and Snap).
  • It only worked for gtk3 apps (it can’t work on gtk2 apps).
  • Some themes never supported a Dark variant. The switch wouldn’t do anything at all with a theme like that.

Adwaita now has a separate Adwaita Dark theme. Arc has 2 different dark variations.

Therefore, if you are a theme developer, you have about 6-7 months to offer a dark version of your theme. The dark version can be distributed the same way as your regular version.

Removal #4 Some letters from our name

In case you haven’t noticed, GNOME Tweak Tool is now GNOME Tweaks. This better matches the GNOME app naming style. Thanks Alberto Fanjul for this improvement!

For other details of what’s changed, including a helpful scrollbar fix from António Fernandes, see the NEWS file.

Sean Whitton: Nourishment

29 August, 2017 - 22:35

This semester I am taking JPN 530, “Haruki Murakami and the Literature of Modern Japan”. My department are letting me count it for the Philosophy Ph.D., and in fact my supervisor is joining me for the class. I have no idea what the actual class sessions will be like—first one this afternoon—and I’m anxious about writing a literature term paper. But I already know that my weekends this semester are going to be great because I’ll be reading Murakami’s novels.

What’s particularly wonderful about this, and what I wanted to write about, is how nourishing I find reading literary fiction to be. For example, this weekend I read

This was something sure to be crammed full of warm secrets, like an antique clock built when peace filled the world. … Potentiality knocks at the door of my heart.

and I was fed for the day. All my perceived needs dropped away; that’s all it takes. This stands in stark contrast to reading philosophy, which is almost always draining rather than nourishing—even philosophy I really want to read. Especially having to read philosophy at the weekend.

(quotation is from On Seeing the 100% Perfect Girl on a Beautiful April Morning)

Reproducible builds folks: Reproducible Builds: Weekly report #122

29 August, 2017 - 22:13

Here's what happened in the Reproducible Builds effort between Sunday August 20 and Saturday August 26 2017:

Debian development
  • "Packages should build reproducibly" was released in Debian Policy 4.1.0.0. For more background please see last week's post.
  • A patch by Chris Lamb to make Dpkg::Substvars warnings output deterministic was merged by Guillem Jover. This helps the Reproducible Builds effort as it removes unnecessary differences in logs of two package builds. (#870221)
Packages reviewed and fixed, and bugs filed

Forwarded upstream:

Accepted reproducibility NMUs in Debian:

Other issues:

Reviews of unreproducible packages

16 package reviews have been added, 38 have been updated and 48 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (37)
  • Dmitry Shachnev (1)
  • James Cowgill (1)
diffoscope development

disorderfs development

Version 0.5.2-1 was uploaded to unstable by Ximin Luo. It included contributions from:

reprotest development

Misc.

This week's edition was written — in alphabetical order — by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Dirk Eddelbuettel: RcppSMC 0.2.0

29 August, 2017 - 09:38

A new version 0.2.0 of the RcppSMC package arrived on CRAN earlier today (as a very quick pretest-publish within minutes of submission).

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article.

This release 0.2.0 is chiefly the work of Leah South, a Ph.D. student at Queensland University of Technology, who was during the last few months a Google Summer of Code student mentored by Adam and myself. It was a pleasure to work with Leah on this and to see her progress. Our congratulations to Leah for a job well done!

Changes in RcppSMC version 0.2.0 (2017-08-28)
  • Also use .registration=TRUE in useDynLib in NAMESPACE

  • Multiple Sequential Monte Carlo extensions (Leah South as part of Google Summer of Code 2017)

    • Switching to population level objects (#2 and #3).

    • Using Rcpp attributes (#2).

    • Using automatic RNGscope (#4 and #5).

    • Adding multiple normalising constant estimators (#7).

    • Static Bayesian model example: linear regression (#10 addressing #9).

    • Adding a PMMH example (#13 addressing #11).

    • Framework for additional algorithm parameters and adaptation (#19 addressing #16; also #24 addressing #23).

    • Common adaptation methods for static Bayesian models (#20 addressing #17).

    • Supporting MCMC repeated runs (#21).

    • Adding adaptation to linear regression example (#22 addressing #18).

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Norbert Preining: Gaming: The Long Dark – Wintermute Episode 1

28 August, 2017 - 12:50

One month ago the Story Mode for The Long Dark was released. I finally managed to finish the first of the two episodes, and it was worth the wait.

I have played many hours in the sandbox mode, but while the first days in story mode hold your hand like a kid's, on the fifth day you are kicked into a brutal void with lots, and I mean *lots*, of wolves more than eager to devour you.

Targeting new players, the first four days one just practices basic techniques: fire making, water, food, first aid, collecting materials. Finally one is allowed to leave the crash site of the plane and climb out into a mountainous area, searching for shelter and, at the end, for Milton, a deserted village. But the moment one reaches a street, wolves appear again and again, and the only option is often to run from car to car and hide there, hoping not to freeze to death. It took me several tries to make it to the church and then into town to meet the Grey Mother.

The rest of episode one is dedicated to various tasks set by the Grey Mother, and some additional (optional) side quests. And while the first encounter with the wolves was pretty grim, in the later parts I had the feeling that they became a bit easier to deal with. This might be related to one of the many patches (8 so far) that were shipped out this month.

After having finished all the quests, including the side quests (and having found the very useful distress pistol), I made my way out of Milton, probably never to be seen there again?! Even with all the quests finished, I had the feeling I could have spent a bit more time exploring the surroundings; maybe something is still hiding out there. But shouldering the climbing rope and climbing out of Milton leads to a very beautiful last stretch, a canyon lined with waterfalls leading to a cave and on to the next episode.

Now I only need more time to play Episode 2.

John Goerzen: The Joy of Exploring: Old Phone Systems, Pizza, and Discovery

28 August, 2017 - 08:54

This story involves boys pretending to be pizza deliverymen using a working automated Strowger telephone exchange demonstrator on display in a museum; the demonstrator is very old and is, to my knowledge, the only such working exhibit in the world. (Yes, I have video.) But first, a thought on exploration.

There are those that would say that there is nothing left to explore anymore – that the whole earth is mapped, photographed by satellites, and, well, known.

I prefer to look at it a different way: the earth is full of places that billions of people will never see, and probably don’t even know about. Those places may be quiet country creeks, peaceful neighborhoods one block away from major tourist attractions, an MTA museum in Brooklyn, a state park in Arkansas, or a beautiful church in Germany.

Martha is not yet two months old, and last week she and I spent a surprisingly long amount of time just gazing at tree branches — she was mesmerized, and why not, because to her, everything is new.

As I was exploring in Portland two weeks ago, I happened to pick up a nearly-forgotten book by a nearly-forgotten person, Beryl Markham, a woman who was a pilot in Africa about 80 years ago. The passage that I happened to randomly flip to in the bookstore, which really grabbed my attention, was this:

The available aviation maps of Africa in use at that time all bore the cartographer’s scale mark, ‘1/2,000,000’ — one over two million. An inch on the map was about thirty-two miles in the air, as compared to the flying maps of Europe on which one inch represented no more than four air miles.

Moreover, it seemed that the printers of the African maps had a slightly malicious habit of including, in large letters, the names of towns, junctions, and villages which, while most of them did exist in fact, as a group of thatched huts may exist or a water hole, they were usually so inconsequential as completely to escape discovery from the cockpit.

Beyond this, it was even more disconcerting to examine your charts before a proposed flight only to find that in many cases the bulk of the terrain over which you had to fly was bluntly marked: ‘UNSURVEYED’.

It was as if the mapmakers had said, “We are aware that between this spot and that one, there are several hundred thousands of acres, but until you make a forced landing there, we won’t know whether it is mud, desert, or jungle — and the chances are we won’t know then!”

— Beryl Markham, West With the Night

My aviation maps today have no such markings. The continent is covered with radio beacons, the world with GPS, the maps with precise elevations of the ground and everything from skyscrapers to antenna towers.

And yet, despite all we know, the world is still a breathtaking adventure.

Yesterday, the boys and I were going to fly to Abilene, KS, to see a museum (Seelye Mansion). Circumstances were such that we neither flew, nor saw that museum. But we still went to Abilene, and wound up at the Museum of Independent Telephony, a wondrous place for anyone interested in the history of technology. As it is one of those off-the-beaten-path sorts of places, the boys got 2.5 hours to use the hands-on exhibits of real old phones, switchboards, and also the schoolhouse out back. They decided — why not? — to use this historic equipment to pretend to order pizzas.

Jacob and Oliver proceeded to invent all sorts of things to use the phones for: ordering pizza, calling the cops to chase the pizza delivery guys, etc. They were so interested that by 2PM we still hadn’t had lunch and they claimed “we’re not hungry” despite the fact that we were going to get pizza for lunch. And I certainly enjoyed the exhibits on the evolution of telephones, switching (from manual plugboards to automated switchboards), and such.

This place was known – it even has a website, I had been there before, and in fact so had the boys (my parents took them there a couple of years ago). But yesterday, we discovered the Strowger switch had been repaired since the last visit, and that it, in fact, is great for conversations about pizza.

Whether it’s seeing an eclipse, discovering a fascination with tree branches, or historic telephones, a spirit of curiosity and exploration lets a person find fun adventures almost anywhere.

Carl Chenet: The Importance of Choosing the Correct Mastodon Instance

28 August, 2017 - 05:00

Remember, Mastodon is a new decentralized social network, based on free software, which is rapidly gaining users (there are already more than 1.5 million accounts). Having created my account in June, I quickly became an addict, and I’ve already written several tools for this network: Feed2toot, Remindr and Boost (mostly in Python).

Now, with all this experience, I have to stress the importance of choosing the correct Mastodon instance.

Some technical reminders on how Mastodon works

First, let’s quickly clarify something about the decentralized part. In Mastodon, decentralization is made through a federation of dedicated servers, called “instances”, each one with a complete independent administration. Your user account is created on one specific instance. You have two choices:

  • Create your own instance. Which requires advanced technical knowledge.
  • Create your user account on a public instance. Which is the easiest and fastest way to start using Mastodon.

You can move your user account from one instance to another, but you have to follow a special procedure which can be quite long, depending on your appetite for technical manipulation and the number of followers you’ll have to warn about your change. You’ll have to create another account on a new instance and import three lists: the one with your followers, the one with the accounts you have blocked, and the one with the accounts you have muted.

From this working process, several technical and human factors will interest us.

A good technical administration for your instance

As a social network, Mastodon is truly decentralized, with more than 1.5 million users on more than 2350 existing instances. As such, the most common usage is to create an account on an open instance; creating one's own instance is way too difficult for the average user. Yet using an open instance creates a strong dependence on the technical administrator of the chosen instance.

The technical administrator has to deal with several obligations to ensure service continuity, such as high-quality hardware and regular back-ups. All of these have a price, both in money and in time.

Regarding the time factor, it is better to choose an administration team over an individual, as life events can quickly change an individual’s priorities. For example, Framasoft, a French association dedicated to promoting Free Software, offers its own Mastodon instance, Framapiaf. The creator of the Mastodon project also offers a quite solid instance, Mastodon.social (see below).

Regarding the money factor, many administrators of instances with a large number of users are currently asking for donations via Patreon, as hosting or renting an instance server costs money.

Mastodon.social, the first instance of the Mastodon network

The Ideological Trend Of Your Instance

While anybody could have guessed the previous technical points, given the recent registration explosion on the Mastodon social network, the following point took almost everyone by surprise. Little by little, different instances have been showing their “culture”, their protest actions, and their propaganda on this social network.

As the instance administrator has all the power over the instance, he or she can block it from interacting with certain other instances, or ban its users from any interaction with other instances’ users.

With everyone having in mind the main advantages of federation, this partial independence of some instances from the rest of the federation came as a huge surprise. One of the most recent examples was when the Unixcorn.xyz instance administrator banned its users from reading Aeris’ account, which was hosted on Aeris’ own instance. It was a cataclysm with several consequences, which I’ve named the #AerisGate, as it showed the differing views on moderation and how it is received by various Mastodon users.

If you don’t manage your own instance, when you choose the one where you create your account, make sure that the content you plan to toot is within the rules and compatible with the ideology of said instance’s administrator. Yes, I know, it may seem surprising, but as stated above, by entering a public instance you become dependent on someone else’s infrastructure, and that someone may have an ideological way of conceiving their Mastodon hosting service. As such, if you’re a nazi, for example, don’t open your Mastodon account on a far-left LGBT instance. Your account wouldn’t stay open for long.

The moderation rules are described in the “about/more” page of the instance, and may contain ideological elements.

To ease the process for newcomers, a great tool is now available to help select the instance best suited to host your account.


Remember that, as stated above, Mastodon is decentralized, and as such there is no central authority you can appeal to in case you have a conflict with your instance’s administrator. And nobody can force said administrator to follow their own rules, or not to change them on the fly.

Think Twice Before Creating Your Account

If you want to create an account on an instance you don’t control, you need to check two things: the long-run availability of the instance hosting service, often tied to the administrator or administration group of said instance, and the ideological orientation of your instance. With these two elements checked, you’ll be able to let your Mastodon account grow peacefully, without fearing an outage of your instance, or simply finding your account blocked one morning because it doesn’t align with your instance’s ideological line.

In Conclusion

To help me get involved in free software and write articles for this blog, please consider a donation through my Liberapay page, even if it’s only a few cents per week. My Bitcoin and Monero addresses are also available on that page.

Follow me on Mastodon

Jonathan McDowell: On my way home from OMGWTFBBQ

28 August, 2017 - 01:42

I started writing this while sitting in Stansted on my way home from the annual UK Debian BBQ. I’m finally home now, after a great weekend catching up with folk. It’s a good social event for a bunch of Debian folk, and I’m very grateful that Steve and Jo continue to make it happen. These days there are also a number of generous companies chipping in towards the cost of food and drink, so thanks also to Codethink and QvarnLabs AB for the food, Collabora and Mythic Beasts for the beer and Chris for the coffee. And Rob for chasing us all for contributions to cover the rest.

I was trying to remember when the first one of these I attended was; trawling through mail logs there was a Cambridge meetup that ended up at Steve’s old place in April 2001, and we’ve consistently had the summer BBQ since 2004, but I’m not clear on what happened in between. Nonetheless it’s become a fixture in the calendar for those of us in the UK (and a number of people from further afield who regularly turn up). We’ve become a bit more sedate, but it’s good to always see a few new faces, drink some good beer (yay Milton), eat a lot and have some good conversations. This year also managed to get me a SheevaPlug so I could investigate #837989 - a bug with OpenOCD not being able to talk to the device. Turned out to be a channel configuration error in the move to new style FTDI support, so I’ve got that fixed locally and pushed the one line fix upstream as well.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.