Planet Debian

Planet Debian - http://planet.debian.org/

Bits from Debian: New Debian Developers and Maintainers (May and June 2017)

2 July, 2017 - 19:30

The following contributors got their Debian Developer accounts in the last two months:

  • Alex Muntada (alexm)
  • Ilias Tsitsimpis (iliastsi)
  • Daniel Lenharo de Souza (lenharo)
  • Shih-Yuan Lee (fourdollars)
  • Roger Shimizu (rosh)

The following contributors were added as Debian Maintainers in the last two months:

  • James Valleroy
  • Ryan Tandy
  • Martin Kepplinger
  • Jean Baptiste Favre
  • Ana Cristina Custura
  • Unit 193

Congratulations!

Ritesh Raj Sarraf: apt-offline 1.8.1 released

2 July, 2017 - 18:38

apt-offline 1.8.1 released.

This is a bug fix release fixing some python3 glitches related to module imports. Recommended for all users.

apt-offline (1.8.1) unstable; urgency=medium

  * Switch setuptools to invoke py3
  * No more argparse needed on py3
  * Fix genui.sh based on comments from pyqt mailing list
  * Bump version number to 1.8.1

 -- Ritesh Raj Sarraf <rrs@debian.org>  Sat, 01 Jul 2017 21:39:24 +0545
 

What is apt-offline

Description: offline APT package manager
 apt-offline is an Offline APT Package Manager.
 .
 apt-offline can fully update and upgrade an APT based distribution without
 connecting to the network, all of it transparent to APT.
 .
 apt-offline can be used to generate a signature on a machine (with no network).
 This signature contains all download information required for the APT database
 system. This signature file can be used on another machine connected to the
 internet (which need not be a Debian box and can even be running windows) to
 download the updates.
 The downloaded data will contain all updates in a format understood by APT and
 this data can be used by apt-offline to update the non-networked machine.
 .
 apt-offline can also fetch bug reports and make them available offline.
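
For illustration, a typical round trip looks roughly like this (the paths are arbitrary, and the exact options should be checked against the apt-offline manual page):

# On the disconnected machine: record what APT needs
apt-offline set /tmp/apt-offline.sig --update --upgrade

# On any machine with internet access (need not run Debian):
apt-offline get /tmp/apt-offline.sig --bundle /tmp/apt-offline.zip

# Back on the disconnected machine: feed the downloads to APT
apt-offline install /tmp/apt-offline.zip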

 


Junichi Uekawa: PLC network connection seems to be less reliable.

2 July, 2017 - 08:28
PLC network connection seems to be less reliable. Maybe it's too hot. Reconnecting it seems to make it better. Hmmm..

Thorsten Alteholz: My Debian Activities in June 2017

2 July, 2017 - 00:12

FTP assistant

This month I marked 100 packages for accept and rejected zero packages. I promise, I will reject more in July!

Debian LTS

This was my thirty-sixth month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload went down again to 16 hours. During that time I did LTS uploads of:

  • [DLA 994-1] zziplib security update for seven CVEs
  • [DLA 995-1] swftools security update for two CVEs
  • [DLA 998-1] c-ares security update for one CVE
  • [DLA 1008-1] libxml2 security update for five CVEs

I also tested the proposed apache2 package prepared by Roberto and started to work on a new bind9 upload.

Last but not least I had five days of frontdesk duties.

Other stuff

This month was filled with updating lots of machines to Stretch. Everything went fine, so thanks a lot to everybody involved in that release.

Nevertheless, I also did a DOPOM upload and now take care of otpw. Luckily, most of the accumulated bugs already contained a patch, so I just had to upload a new version and close them.

Daniel Silverstone: F/LOSS activity, June 2017

1 July, 2017 - 19:59

It seems to be becoming popular to send a mail each month detailing your free software work for that month. I have been slowly ramping my F/LOSS activity back up, after years away where I worked on completing my degree. My final exam for that was in June 2017 and as such I am now in a position to try and get on with more F/LOSS work.

My focus, as you might expect, has been on Gitano which reached 1.0 in time for Stretch's release and which is now heading gently toward a 1.1 release which we have timed for Debconf 2017. My friend and colleague Richard has been working hard on Gitano and related components during this time too, and I hope that Debconf will be an opportunity for him to meet many of my Debian friends too. But enough of that, back to the F/LOSS.

We've been running Gitano developer days roughly monthly since March of 2017, and the June developer day was attended by myself, Richard Maw, and Richard Ipsum. You are invited to read the wiki page for the developer day if you want to know exactly what we got up to, but a summary of my involvement that day is:

  • I chaired the review of our current task state for the project
  • I chaired the decision on the 1.1 timeline.
  • I completed a code branch which adds rudimentary hook support to Gitano and submitted it for code review.
  • I began to learn about git-multimail since we have a need to add support for it to Gitano

Other than that, related to Gitano during June I:

  • Reviewed Richard Ipsum's lua-scrypt patches for salt generation
  • Reviewed Richard Maw's work on centralising Gitano's patterns into a module.
  • Responded to reviews of my hook work, though I need to clean it up some before it'll be merged.

My non-Gitano F/LOSS related work in June has been entirely centred around the support I provide to the Lua community in the form of the Lua mailing list and website. The host on which it's run is ailing, and I've had to spend time trying to improve and replace that.

Hopefully I'll have more to say next month. Perhaps by doing this reporting I'll get more F/LOSS done. Of course, July's report will be sent out while I'm in Montréal for debconf 2017 (or at least for debcamp at that point) so hopefully more to say anyway.

Keith Packard: DRM-lease-4

1 July, 2017 - 15:45
DRM leasing part three (vblank)

The last couple of weeks have been consumed by getting frame sequence numbers and events handled within the leasing environment (and Vulkan) correctly.

Vulkan EXT_display_control extension

This little extension provides the bits necessary for applications to track the display of frames to the user.

VkResult
vkGetSwapchainCounterEXT(VkDevice                    device,
                         VkSwapchainKHR              swapchain,
                         VkSurfaceCounterFlagBitsEXT counter,
                         uint64_t                    *pCounterValue);

This function just retrieves the current frame count from the display associated with swapchain.
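
As a sketch (mine, not from the post or the extension spec), reading the counter might look like this, assuming a device and a swapchain that was created with counter support:

#include <vulkan/vulkan.h>

uint64_t
get_frame_count(VkDevice device, VkSwapchainKHR swapchain)
{
    uint64_t count = 0;

    /* VK_SURFACE_COUNTER_VBLANK_EXT selects the vertical-blank counter */
    if (vkGetSwapchainCounterEXT(device, swapchain,
                                 VK_SURFACE_COUNTER_VBLANK_EXT,
                                 &count) != VK_SUCCESS)
        return 0;
    return count;
}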

VkResult
vkRegisterDisplayEventEXT(VkDevice                    device,
                          VkDisplayKHR                display,
                          const VkDisplayEventInfoEXT *pDisplayEventInfo,
                          const VkAllocationCallbacks *pAllocator,
                          VkFence                     *pFence);

This function creates a fence that will be signaled when the specified event happens. Right now, the only event supported is when the first pixel of the next display refresh cycle leaves the display engine for the display. If you want something fancier (like two frames from now), you get to do that on your own using this basic function.
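
A minimal usage sketch (mine, with error handling and fence cleanup omitted), assuming a device and display are already in hand:

#include <vulkan/vulkan.h>

VkResult
wait_for_next_frame(VkDevice device, VkDisplayKHR display)
{
    VkDisplayEventInfoEXT info = {
        .sType = VK_STRUCTURE_TYPE_DISPLAY_EVENT_INFO_EXT,
        .displayEvent = VK_DISPLAY_EVENT_TYPE_FIRST_PIXEL_OUT_EXT,
    };
    VkFence fence;
    VkResult r;

    /* create a fence signaled at the next first-pixel-out event */
    r = vkRegisterDisplayEventEXT(device, display, &info, NULL, &fence);
    if (r != VK_SUCCESS)
        return r;

    /* block until that frame actually leaves the display engine */
    return vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
}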

drmWaitVBlank

drmWaitVBlank is the existing interface for all things sequence related and has three modes (always nice to have one function do three things, I think). It can:

  1. Query the current vblank number
  2. Block until a specified vblank number
  3. Queue an event to be delivered at a specific vblank number
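
For example, the query mode (1) is conventionally expressed as a relative wait of zero frames. A sketch of mine, not from the post, with error handling omitted:

#include <string.h>
#include <xf86drm.h>

uint32_t
current_vblank(int fd)
{
    drmVBlank vbl;

    memset(&vbl, 0, sizeof(vbl));
    vbl.request.type = DRM_VBLANK_RELATIVE;  /* relative to now... */
    vbl.request.sequence = 0;                /* ...plus zero frames = query */
    drmWaitVBlank(fd, &vbl);
    return vbl.reply.sequence;
}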

This interface has a few issues:

  • It has been kludged into supporting multiple CRTCs by taking bits from the 'type' parameter to hold a 'pipe' number, which is the index in the kernel into the array of CRTCs.

  • It has a random selection of 'int' and 'long' datatypes in the interface, making it need special helpers for 32-bit apps running on a 64-bit kernel.

  • Times are in microseconds and frame counts are 32 bits, while Vulkan does everything in nanoseconds and wants 64-bit frame counts.

For leases, figuring out the index into the kernel list of crtcs is pretty tricky -- our lease has a subset of those crtcs, so we can't actually compute the global crtc index.

drmCrtcGetSequence
int drmCrtcGetSequence(int fd, uint32_t crtcId,
                       uint64_t *sequence, uint64_t *ns);

Here's a simple new function — hand it a crtc ID and it provides the current frame sequence number and the time when that frame started (in nanoseconds).

drmCrtcQueueSequence
int drmCrtcQueueSequence(int fd, uint32_t crtcId,
                         uint32_t flags, uint64_t sequence,
                         uint64_t user_data);

struct drm_event_crtc_sequence {
    struct drm_event base;
    __u64            user_data;
    __u64            time_ns;
    __u64            sequence;
};

This will cause a CRTC_SEQUENCE event to be delivered at the start of the specified frame sequence. That event will include the frame when the event was actually generated (in case it's late), along with the time (in nanoseconds) when that frame was started. The event also includes a 64-bit user_data value, which can be used to hold a pointer to whatever data the application wants to see in the event handler.

The 'flags' argument contains a combination of:

#define DRM_CRTC_SEQUENCE_RELATIVE        0x00000001  /* sequence is relative to current */
#define DRM_CRTC_SEQUENCE_NEXT_ON_MISS    0x00000002  /* use next sequence if we've missed */
#define DRM_CRTC_SEQUENCE_FIRST_PIXEL_OUT 0x00000004  /* signal when first pixel is displayed */

These are similar to the values provided for the drmWaitVBlank function, except I've added a selector for when the event should be delivered to align with potential future additions to Vulkan. Right now, the only time you can ask for is first-pixel-out, which says that the event should correspond to the display of the first pixel on the screen.
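
Putting the two new functions together, based only on the signatures above (illustrative, not code from the patch series): queue an event for the next frame, carrying a pointer back in user_data:

#include <stdint.h>

struct frame_data { int serial; };      /* whatever the app wants back */

void
queue_next_frame(int fd, uint32_t crtc_id, struct frame_data *data)
{
    uint64_t sequence, ns;

    /* fetch the current frame count so the relative offset is clear */
    drmCrtcGetSequence(fd, crtc_id, &sequence, &ns);

    /* deliver a CRTC_SEQUENCE event one frame from now; if we're too
       late, fire on the next frame instead of dropping the event */
    drmCrtcQueueSequence(fd, crtc_id,
                         DRM_CRTC_SEQUENCE_RELATIVE |
                         DRM_CRTC_SEQUENCE_NEXT_ON_MISS,
                         1, (uint64_t)(uintptr_t)data);
}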

DRM events → Vulkan fences

With the kernel able to deliver a suitable event at the next frame, all the Vulkan code needed was to create a fence and hook it up to such an event. The existing fence code only deals with rendering fences, so I added window system interface (WSI) fencing infrastructure and extended the radv driver to be able to handle both kinds of fences within that code.

Multiple waiting threads

I've now got three places which can be waiting for a DRM event to appear:

  1. Frame sequence fences.

  2. Wait for an idle image. Necessary when you want an image to draw the next frame to.

  3. Wait for the previous flip to complete. The kernel can only queue one flip at a time, so we have to make sure the previous flip is complete before queuing another one.

Vulkan allows these to be run from separate threads, so I needed to deal with multiple threads waiting for a specific DRM event at the same time.

XCB has the same problem and goes to great lengths to manage this with a set of locking and signaling primitives so that only one thread is ever doing poll or read from the socket at a time. If another thread wants to read at the same time, it will block on a condition variable which is then signaled by the original reader thread at the appropriate time. It's all very complicated, and it didn't work reliably for a number of years.

I decided to punt and just create a separate thread for processing all DRM events. It blocks using poll(2) until some events are readable, processes those and then broadcasts to a condition variable to notify any waiting threads that 'something' has happened. Each waiting thread simply checks for the desired condition and if not satisfied, blocks on that condition variable. It's all very simple looking, and seems to work just fine.
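
Schematically, the pattern looks like this (my sketch of the design described, not the actual radv code; process_drm_events is a stand-in for the real event parsing):

#include <poll.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* hypothetical helper: read and record any pending DRM events */
static void process_drm_events(int fd) { (void)fd; /* elided */ }

void *
event_thread(void *arg)
{
    int fd = *(int *) arg;
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    for (;;) {
        poll(&pfd, 1, -1);              /* block until events readable */
        pthread_mutex_lock(&lock);
        process_drm_events(fd);
        pthread_cond_broadcast(&cond);  /* "something" has happened */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

void
wait_until(int (*satisfied)(void))
{
    pthread_mutex_lock(&lock);
    while (!satisfied())                /* re-check; spurious wakes are fine */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}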

Code Complete, Validation Remains

At this point, all of the necessary pieces are in place for the VR application to take advantage of an HMD using only existing Vulkan extensions. Those will be automatically mapped into DRM leases and DRM events as appropriate.

The VR compositor application is working pretty well; tests with Dota 2 show occasional jerky behavior in complex scenes, so there's clearly more work to be done somewhere. I need to go write a pile of tests to independently verify that my code is working. I wonder if I'll need to wire up some kind of light sensor so I can actually tell when frames get displayed as it's pretty easy to get consistent-but-wrong answers in this environment.

Source Code
  • Linux. This is based on a reasonably current drm-next branch from Dave Airlie, 965 commits past 4.12 RC3.

    git://people.freedesktop.org/~keithp/linux drm-lease-v3

  • X server (which includes xf86-video-modesetting). This is pretty close to master.

    git://people.freedesktop.org/~keithp/xserver drm-lease

  • RandR protocol changes

    git://people.freedesktop.org/~keithp/randrproto drm-lease

  • xcb proto (no changes to libxcb sources, but it will need to be rebuilt)

    git://people.freedesktop.org/~keithp/xcb/proto drm-lease

  • DRM library. About a dozen patches behind master.

    git://people.freedesktop.org/~keithp/drm drm-lease

  • Mesa. Branched early this month (4 June), this is pretty far from master.

    git://people.freedesktop.org/~keithp/mesa drm-lease

Russ Allbery: End of month haul

1 July, 2017 - 10:44

For some reason, June is always incredibly busy for me. It's historically the most likely month in which I don't post anything here at all except reviews (and sometimes not even that). But I'm going to tell you about what books I bought (or were given to me) on the very last day of the month to break the pattern of no journal postings in June.

Ted Chiang — Arrival (Stories of Your Life) (sff collection)
Eoin Colfer — Artemis Fowl (sff)
Philip K. Dick — The Man Who Japed (sff)
Yoon Ha Lee — Raven Stratagem (sff)
Paul K. Longmore — Why I Burned My Book (nonfiction)
Melina Marchetta — The Piper's Son (mainstream)
Jules Verne — For the Flag (sff, sort of)

This is a more than usually eclectic mix.

The Chiang is his Stories of Your Life collection reissued under the title of Arrival to cash in on the huge success of the movie based on one of his short stories. I'm not much of a short fiction reader, but I've heard so many good things about Chiang that I'm going to give it a try.

The Longmore is a set of essays about disability rights that has been on my radar for a while. I finally got pushed into buying it (plus the first Artemis Fowl book and the Marchetta) because I've been reading back through every review Light has written. (I wish I were even close to that amusingly snarky in reviews, and she manages to say in a few paragraphs what usually takes me a whole essay.)

Finally, the Dick and the Verne were gifts from a co-worker from a used book store in Ireland. Minor works by both authors, but nice, old copies of the books.

Russ Allbery: Review: Make It Stick

1 July, 2017 - 10:05

Review: Make It Stick, by Peter C. Brown, et al.

Author: Peter C. Brown
Author: Henry L. Roediger III
Author: Mark A. McDaniel
Publisher: Belknap Press
Copyright: 2014
ISBN: 0-674-72901-3
Format: Kindle
Pages: 255

Another read for the work book club.

"People generally are going about learning in the wrong ways." This is the first sentence of the preface of this book by two scientists (Roediger and McDaniel are both psychology researchers specializing in memory) and a novelist and former management consultant (Brown). The goal of Make It Stick is to apply empirical scientific research to the problem of learning, specifically retention of information for long-term use. The authors aim to convince the reader that subjective impressions of the effectiveness of study habits are highly deceptive, and that scientific evidence points strongly towards mildly counter-intuitive learning methods that don't feel like they're producing as good of results.

I have such profound mixed feelings about this book.

Let's start with the good. Make It Stick is a book containing actual science. The authors quote the studies, results, and scientific argument at length. There are copious footnotes and an index, as well as recommended reading. And the science is concrete and believable, as is the overlaid interpretation based on cognitive and memory research.

The book's primary argument is that short-term and long-term memory are very different things, that what we're trying to achieve when we say "learning" is based heavily on long-term memory and recall of facts for an extended time after study, and that building this type of recall requires not letting our short-term memory do all the work. We tend towards study patterns that show obvious short-term improvement and that produce an increased feeling of effortless recall of the material, but those study patterns are training short-term memory and mean the knowledge slips away quickly. Choosing learning methods that instead make us struggle a little with what we're learning are significantly better. It's that struggle that leads to committing the material to long-term memory and building good recall pathways for it.

On top of this convincingly-presented foundation, the authors walk through learning methods that feel worse in the moment but have better long-term effects: mixing practice of different related things (different types of solids when doing geometry problems, different pitches in batting practice) and switching types before you've mastered the one you're working on, forcing yourself to interpret and analyze material (such as writing a few paragraphs of summary in your own words) instead of re-reading it, and practicing material at spaced intervals far enough apart that you've forgotten some of the material and have to struggle to recall it. Possibly the most useful insight here (at least for me) was the role of testing in learning, not as just a way of measuring progress, but as a learning tool. Frequent, spaced, cumulative testing forces exactly the type of recall that builds long-term memory. The tests themselves help improve our retention of what we're learning. It's bad news for people like me who were delighted to leave school and not have to take a test again, but viewing tests as a more effective learning tool than re-reading and review (which they are) does cast them in a far more positive light.

This is all solid stuff, and I'm very glad the research underlying this book exists and that I now know about it. But there are some significant problems with its presentation.

The first is that there just isn't much here. The two long paragraphs above summarize nearly all of the useful content of this book. The authors certainly provide more elaboration, and I haven't talked about all of the study methods they mention or some of the useful examples of their application. But 80% of it is there, and the book is intentionally repetitive (because it tries to follow the authors' advice on learning theory). Make It Stick therefore becomes tedious and boring, particularly in the first four chapters. I was saying a lot of "yes, yes, you said that already" and falling asleep while trying to read it. The summaries at the end of the book are a bit better, but you will probably not need most of this book to get the core ideas.

And then there's chapter five, which ends in a train wreck.

Chapter five is on cognitive biases, and I see why the authors wanted to include it. The Dunning-Kruger effect is directly relevant to their topic. It undermines our ability to learn, and is yet another thing that testing helps avoid. Their discussion of Daniel Kahneman's two system theory (your fast, automatic, subconscious reactions and your slow, thoughtful, conscious processing) is somewhat less directly relevant, but it's interesting stuff, and it's at least somewhat related to the short-term and long-term memory dichotomy. But some of the stories they choose to use to illustrate this are... deeply unfortunate. Specifically, the authors decided to use US police work in multiple places as their example of choice for two-system thinking, and treat it completely uncritically.

Some of you are probably already wincing because you can see where this is going.

They interview a cop who, during scenario training for traffic stops, was surprised by the car trunk popping open and a man armed with a shotgun popping out of it. To this day, he still presses down on the trunk of the car as he walks up; it's become part of his checklist for every traffic stop. This would be a good example if the authors realized how badly his training has failed and deconstructed it, but they're apparently oblivious. I wanted to reach into the book and shake them. People have a limited number of things they can track and follow as part of a procedure, and some bad trainer has completely wasted part of this cop's attention in every traffic stop and thereby made him less safe! Just calculate the chances that someone would be curled up in an unlocked trunk with a shotgun and a cop would just happen to stop that car for some random reason, compared to any other threat the cop could use that same attention to watch for. This is exactly the type of scenario that's highly memorable but extremely improbable and therefore badly breaks human risk analysis. It's what Bruce Schneier calls a movie plot threat. The correct reaction to movie plot threats is to ignore them; wasting effort on mitigating them means not having that effort to spend on mitigating some other less memorable but more likely threat.

This isn't the worst, though. The worst is the very next paragraph, also from police training, of showing up at a domestic call, seeing an armed person on the porch who stands up and walks away when ordered to drop their weapon, and not being sure how to react, resulting in that person (in the simulated exercise) killing the cop before they did anything. The authors actually use this as an example of how the cop was using system two and needed to train to use system one in that situation to react faster, and that this is part of the point of the training.

Those of us who have been paying attention to the real world know what using system one here means: the person on the porch gets shot if they're black and doesn't get shot if they're white. The authors studiously refuse to even hint at this problem.

I would have been perfectly happy if this book avoided the unconscious bias aspect of system one thinking. It's a bit far afield of the point of the book, and the authors are doubtless trying to stay apolitical. But that's why you pick some other example. You cannot just drop this kind of thing on the page and then refuse to even comment on it! It's like writing a chapter about the effect of mass transit on economic development, choosing Atlanta as one of your case studies, and then never mentioning race.

Also, some editor seriously should have taken an ax to the sentence where the authors (for no justified reason) elaborate a story to describe a cop maiming a person, solely to make a cliched joke about how masculinity is defined by testicles and how people who lose body parts are less human. Thanks, book.

This was bad enough that it dominated my memory of this chapter, but, reviewing the book for this review, I see it was just a few badly chosen examples at the end of the chapter and one pointless story at the start. The rest of the chapter is okay, although it largely summarizes things covered better in other books. The most useful part that's relevant to the topic of the book is probably the discussion of peer instruction. Just skip over all the police bits; you won't be missing anything.

Thankfully, the rest of the book mostly avoids failing quite this hard. Chapter six does open with the authors obliviously falling for a string of textbook examples of survivorship bias (immediately after the chapter on cognitive biases!), but they shortly thereafter settle down to the accurate and satisfying work of critiquing theories of learning methods and types of intelligence. And by critiquing, I mean pointing out that they're mostly unscientific bullshit, which is fighting the good fight as far as I'm concerned.

So, mixed feelings. The science seems solid, and is practical and directly applicable to my life. Make It Stick does an okay job at presenting it, but gets tedious and boring in places, particularly near the beginning. And there are a few train-wreck examples that had me yelling at the book and scribbling notes, which wasn't really the cure for boredom I was looking for. I recommend being aware of this research, and I'm glad the authors wrote this book, but I can't really recommend the book itself as a reading experience.

Rating: 6 out of 10

Paul Wise: FLOSS Activities June 2017

1 July, 2017 - 09:51
Changes

Issues

Review

Administration
  • Debian: redirect 2 users to support channels, redirect 1 person to the mirrors team, investigate SMTP TLS question, fix ACL issue, restart dead exim4 service
  • Debian mentors: service restarts, security updates & reboot
  • Debian QA: deploy my changes
  • Debian website: release related rebuilds, rebuild installation-guide
  • Debian wiki: whitelist several email addresses, whitelist 1 domain
  • Debian package tracker: deploy my changes
  • Debian derivatives census: deploy my changes
  • Openmoko: security updates & reboots.
Communication

Sponsors

All work was done on a volunteer basis.

Chris Lamb: Free software activities in June 2017

1 July, 2017 - 06:29

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Updated travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds:
    • Support Debian "buster". (commit)
    • Set TRAVIS=true environment variable when running autopkgtests. (#45)
  • Updated the documentation in django-slack, my library to easily post messages to the Slack group-messaging utility to link to Slack's own message formatting documentation. (#66)
  • Added "buster" support to local-debian-mirror, my package to easily maintain and customise a local Debian mirror via the DebConf configuration tool. (commit)
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source. Multiple third-parties then can come to a consensus on whether a build was compromised or not.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Chaired our monthly IRC meeting. (Summary, logs, etc.)
  • Presented at Hong Kong Open Source Conference 2017.
  • Presented at LinuxCon China.
  • Submitted the following patches to fix reproducibility-related toolchain issues within Debian:
    • cracklib2: Ensuring /var/cache/cracklib/src-dicts are reproducible. (#865623)
    • fontconfig: Ensuring the cache files are reproducible. (#864082)
    • nfstrace: Make the PDF footers reproducible. (#865751)
  • Submitted 6 patches to fix specific reproducibility issues in cd-hit, janus, qmidinet, singularity-container, tigervnc & xabacus.
  • Submitted a wishlist request to the TeX mailing list to ensure that PDF files are reproducible even if generated from a difficult path, after identifying the underlying cause. (Thread)
  • Categorised a large number of packages and issues in the Reproducible Builds notes.git repository.
  • Worked on publishing our weekly reports. (#110, #111, #112 & #113)
  • Updated our website with 13 missing talks (e291180), updated the metadata for some existing talks (650a201) and added OpenEmbedded to the projects page (12dfcf0).

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Add libarchive-cpio-perl with the !nocheck build profile. (01e408e)
  • Add dpkg-dev dependency build profile. (f998bbe)


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list. However, I:

Debian LTS

This month I have been paid to work 16 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 974-1 fixing a command injection vulnerability in picocom, a dumb-terminal emulation program.
  • Issued DLA 972-1 which patches a double-free vulnerability in the openldap LDAP server.
  • Issued DLA 976-1 which corrects a buffer over-read vulnerability in the yodl ("Your Own Document Language") document processor.
  • Issued DLA 985-1 to address a vulnerability in libsndfile (a library for reading/writing audio files) where a specially-crafted AIFF file could result in an out-of-bounds memory read.
  • Issued DLA 990-1 to fix an infinite loop vulnerability in expat, an XML parsing library.
  • Issued DLA 999-1 for the openvpn VPN server — if clients used an HTTP proxy with NTLM authentication, a man-in-the-middle attacker could cause the client to crash or disclose stack memory that was likely to contain the proxy password.
Uploads
  • bfs (1.0.2-1) — New upstream release, add basic/smoke autopkgtests.
  • installation-birthday (5) — Add some basic autopkgtest smoke tests and correct the Vcs-{Git,Browser} headers.
  • python-django:
    • 1:1.11.2-1 — New upstream minor release & backport an upstream patch to prevent a test failure if the source is not writable. (#816435)
    • 1:1.11.2-2 — Upload to unstable, use !nocheck profile for build dependencies that are only required for tests and various packaging updates.

I also made the following non-maintainer uploads (NMUs):

  • kluppe (0.6.20-1.1) — Fix segmentation fault caused by passing a truncated pointer instead of a GtkType. (#863421)
  • porg (2:0.10-1.1) — Fix broken LD_PRELOAD path for libporg-log.so. (#863495)
  • ganeti-instance-debootstrap (0.16-2.1) — Fix "illegal option for fgrep" error by using "--" to escape the search needle. (#864025)
  • pavuk (0.9.35-6.1) — Fix segmentation fault when opening the "Limitations" window due to pointer truncation in src/gtkmulticol.[ch]. (#863492)
  • timemachine (0.3.3-2.1) — Fix two segmentation faults in src/gtkmeter.c and gtkmeterscale.c caused by passing truncated pointers using guint instead of a GtkType. (#863420)
  • jackeq (0.5.9-2.1) — Fix another segmentation fault caused by passing a truncated pointer instead of a GtkType. (#863416)
Debian bugs filed
  • debhelper: Don't run dh_installdocs if nodoc is specified in DEB_BUILD_PROFILES? (#865869)
  • python-blessed: Non-deterministically FTBFS due to unreliable timing in tests. (#864337)
  • apt: Please print a better error message if zero certificates are loaded from the system CA store. (#866377)
FTP Team

As a Debian FTP assistant I ACCEPTed 16 packages: faceup, golang-github-andybalholm-cascadia, haskell-miniutter, libplack-builder-conditionals-perl, libprelude, lua-argparse, network-manager-l2tp, node-gulp-concat, node-readable-stream, node-stream-assert, node-xterm, pydocstyle, pytest-bdd, python-iso3166, python-zxcvbn & stressant.

Arturo Borrero González: About the OutlawCountry Linux malware

30 June, 2017 - 17:32

Today I noticed the internet buzz about a new alleged Linux malware, called OutlawCountry, created by the CIA and leaked by Wikileaks.

The malware redirects traffic from the victim to a control server in order to spy or whatever. To redirect this traffic, they use simple Netfilter NAT rules injected in the kernel.

According to many sites commenting on the issue, it seems that there is something wrong with the Linux kernel Netfilter subsystem. But I read the leaked docs, and what they do is load a custom kernel module in order to load Netfilter NAT tables/rules with higher priority than the default ones (overriding any config the system may have).

Isn’t that clear? The attacker is loading a custom kernel module as root on your machine. They don’t use Netfilter to break into your system. The problem is not Netfilter; the problem is your whole machine being under their control.

With root control of the machine, they could simply use any mechanism, like kpatch or whatever, to replace your whole running kernel with a new one, with full access to memory, networking, file system et al.

They probably use a rootkit or the like to take over the system.

Michal Čihař: Weblate 2.15

30 June, 2017 - 16:00

Weblate 2.15 has been released today. It is slightly behind schedule, mostly because of my vacation. As with 2.14, there are quite a lot of security improvements based on reports we got from the HackerOne program, as well as various new features.

Full list of changes:

  • Show more related translations in other translations.
  • Add option to see translations of current unit to other languages.
  • Use 4 plural forms for Lithuanian by default.
  • Fixed upload for monolingual files of different format.
  • Improved error messages on failed authentication.
  • Keep page state when removing word from glossary.
  • Added direct link to edit secondary language translation.
  • Added Perl format quality check.
  • Added support for rejecting reused passwords.
  • Extended toolbar for editing RTL languages.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Daniel Pocock: A FOSScamp by the beach

30 June, 2017 - 15:47

I recently wrote about the great experience many of us had visiting OSCAL in Tirana. Open Labs is doing a great job promoting free, open source software there.

They are now involved in organizing another event at the end of the summer, FOSScamp in Syros, Greece.

Looking beyond the promise of sun and beach, FOSScamp is also just a few weeks ahead of the Outreachy selection deadline so anybody who wants to meet potential candidates in person may find this event helpful.

If anybody wants to discuss the possibilities for involvement in the event then the best place to do that may be on the Open Labs forum topic.

What will tomorrow's leaders look like?

While watching a talk by Joni Baboci, head of Tirana's planning department, I was pleasantly surprised to see a photo of Open Labs board members attending the town hall for the signing of an open data agreement.

It's great to see people finding ways to share the principles of technological freedoms far and wide and it will be interesting to see how this relationship with their town hall grows in the future.

Steve Kemp: Yet more linux security module craziness ..

29 June, 2017 - 04:00

I've recently been looking at linux security modules. My first two experiments helped me learn:

My First module - whitelist_lsm.c

This looked for the presence of an xattr, and if present allowed execution of binaries.

I learned about the Kernel build-system, and how to write a simple LSM.

My second module - hashcheck_lsm.c

This looked for the presence of a "known-good" SHA1 hash xattr, and if it matched the actual hash of the file on-disk allowed execution.

I learned how to hash the contents of a file, from kernel-space.

Both allowed me to learn things, but both were a little pointless. They were not fine-grained enough to allow different things to be done by different users. (i.e. If you allowed "alice" to run "wget" you'd also allow www-data to do the same.)

So, assuming you wanted to do your security job more neatly, what would you want? You'd want to allow/deny execution of commands based upon:

  • The user who was invoking them.
  • The path of the binary itself.

So your local users could run "bad" commands, but "www-data" (post-compromise) couldn't.

Obviously you don't want to have to recompile your kernel to change the rules of who can execute what. So you think to yourself "I'll write those rules down in a file". But of course reading a file from kernel-space is tricky. And parsing any list of rules, in a file, from kernel-space would be prone to buffer-related problems.

So I had a crazy idea:

  • When a user attempts to execute a program.
  • Call back to user-space to see if that should be permitted.
    • Give the user-space binary the UID of the invoker, and the path to the command they're trying to execute.

Calling userspace? Every time a command is to be executed? Crazy. But it just might work.

One problem I had with this approach is that userspace might not even be available when you're booting. So I set up a flag to enable this stuff:

# echo 1 >/proc/sys/kernel/can-exec/enabled

Now the kernel will invoke the following on every command:

/sbin/can-exec $UID $PATH

Because the kernel waits for this command to complete - as it reads the exit-code - you cannot execute any child-processes from it as you'd end up in recursive hell, but you can certainly read files, write to syslog, etc. My initial implementation was as basic as this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <syslog.h>

int main( int argc, char *argv[] )
{
...

  // Get the UID + Program
  int uid = atoi( argv[1] );
  char *prg = argv[2];

  // syslog
  openlog ("can-exec", LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL1);
  syslog (LOG_NOTICE, "UID:%d CMD:%s", uid, prg );

  // root can do all.
  if ( uid == 0 )
    return 0;

  // nobody
  if ( uid == 65534 ) {
    if ( ( strcmp( prg , "/bin/sh" ) == 0 ) ||
         ( strcmp( prg , "/usr/bin/id" ) == 0 ) ) {
      fprintf(stderr, "Allowing 'nobody' access to shell/id\n" );
      return 0;
    }
  }

  fprintf(stderr, "Denied\n" );
  return -1;
}

Although the UIDs are hard-coded, it actually worked! Yay!

I updated the code to convert the UID to a username, then check executables via the file /etc/can-exec/$USERNAME.conf, and this also worked.
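
A sketch of that lookup (mine, not Steve's exact code; the one-allowed-path-per-line config format is my assumption):

#include <pwd.h>
#include <stdio.h>
#include <string.h>

int
is_allowed(int uid, const char *prg)
{
    struct passwd *pw = getpwuid(uid);  /* UID -> username */
    char path[256], line[1024];
    FILE *fp;
    int ok = 0;

    if (pw == NULL)
        return 0;

    snprintf(path, sizeof(path), "/etc/can-exec/%s.conf", pw->pw_name);
    fp = fopen(path, "r");
    if (fp == NULL)
        return 0;                       /* no config file: deny */

    while (fgets(line, sizeof(line), fp) != NULL) {
        line[strcspn(line, "\n")] = '\0';
        if (strcmp(line, prg) == 0) {   /* exact path match: allow */
            ok = 1;
            break;
        }
    }
    fclose(fp);
    return ok;
}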

I don't expect anybody to actually use this code, but I do think I've reached a point where I can pretend I've written a useful (or non-pointless) LSM at last. That means I can stop.

Yakking: What is Time?

28 June, 2017 - 19:00

Time is a big enough subject that I'd never finish writing about it.

Fortunately this is a Linux technical blog, so I only have to talk about it from that perspective.

In a departure from our usual article series format, we are going to start with an introductory article and follow up starting from basics, introducing new concepts in order of complexity.

What is time?

Time is seconds since the unix epoch.

The Unix epoch is midnight, Thursday, 1st January 1970 in the UTC time zone.

If you know the unix time, and how many seconds there have been in each day since then, you can use the unix timestamp to work out what time it is now in UTC.

If you want to know the time in another timezone, the naïve thing is to apply a timezone offset based on how far ahead of or behind UTC the zone is.

This is non-trivial, since time is a political issue requiring global coordination: with hundreds of countries moving in and out of daylight saving time and making various other administrative changes, time zone rules change often.

Some times don't exist in some timezones: Samoa skipped a whole day when it changed time zone.

A side-effect of your system clock being in Unix time is that to know local time you need to know where you are.
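
In practice the C library does these conversions for you, using the system's tzdata rules rather than a fixed offset. A minimal sketch:

#include <stdio.h>
#include <time.h>

int
main(void)
{
    time_t now = time(NULL);            /* seconds since the Unix epoch */
    char utc[64], local[64];

    strftime(utc, sizeof(utc), "%Y-%m-%d %H:%M:%S",
             gmtime(&now));             /* UTC */
    strftime(local, sizeof(local), "%Y-%m-%d %H:%M:%S",
             localtime(&now));          /* zone rules from $TZ */

    printf("epoch: %lld\nUTC:   %s\nlocal: %s\n",
           (long long) now, utc, local);
    return 0;
}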

The talk linked to in https://fosdem.org/2017/schedule/event/python_datetime/ is primarily about time in python, but covers the relationship between time stamps, clock time and time zones.

http://www.creativedeletion.com/2015/01/28/falsehoods-programmers-date-time-zones.html is a good summary of some of the common misconceptions of time zone handling.

Now that we've covered some of the concepts we can discuss in future articles how this is implemented in Linux.

Daniel Pocock: How did the world ever work without Facebook?

28 June, 2017 - 02:29

Almost every day, somebody tells me there is no way they can survive without some social media like Facebook or Twitter. Otherwise mature adults are fearful that without these dubious services they would have no human contact ever again, they would die of hunger, and the sky would come crashing down too.

It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?

Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common: but both of them changed the world and both of them did so by going against the system. Gandhi, of course, relied on non-violence while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media but both are likely to be remembered far longer than any viral video clip you have seen recently.

With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media is not on your side, it's on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or broken down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling.

Facebook helps kettle activists in their armchairs. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their homes.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computer spending hours trying to replicate this phenomenon.

Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets and with odds that are just as bad.

It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral is wasting the energy and concentration you need to make something that is worthy of really being viral.

An earthquake and an escaped lion never needed to announce themselves on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media puts Facebook up on a podium like this, ahead of all other industries, if the media aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog.

The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment world. During his presidential campaign, he had at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored like the majority of Twitter accounts.

On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and deceiving. The reality, again, is a grassroots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people to get out and be active is key. Social media is incidental to their campaign, not indispensable.

Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 bellow out a racist slogan at one of his rallies, is there any doubt whether each and every one of those people is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend.

Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend?

If you want to find out what a real friend is and who your real friends really are, there is no better way to do so than blowing away your Facebook and Twitter account and waiting to see who contacts you personally about meeting up in the real world.

If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers they have. Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned Friending people into something like a computer game.

This research is also given far more attention than it deserves, though: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember; that may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests between 10-20 friendships are in this category and you should spend all your time with these people rather than letting your time be spread thinly across superficial Facebook "friends".

This same logic can be extrapolated to activism and marketing in its many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists who he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social, Diaspora and identi.ca are some of the better-known examples.

Trying to persuade people to move from Facebook to one of these platforms rarely works. In most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole.

To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and reply on the thread.

Reproducible builds folks: Reproducible Builds: week 113 in Stretch cycle

27 June, 2017 - 23:55

Here's what happened in the Reproducible Builds effort between Sunday June 18 and Saturday June 24 2017:

Upcoming and Past events

Our next IRC meeting is scheduled for the 6th of July at 17:00 UTC with this agenda currently:

  1. Introductions
  2. Reproducible Builds Summit update
  3. NMU campaign for buster
  4. Press release: Debian is doing Reproducible Builds for Buster
  5. Reproducible Builds Branding & Logo
  6. should we become an SPI member
  7. Next meeting
  8. Any other business

On June 19th, Chris Lamb presented at LinuxCon China 2017 on Reproducible Builds.

On June 23rd, Vagrant Cascadian held a Reproducible Builds question and answer session at Open Source Bridge.

Reproducible work in other projects

LEDE: firmware-utils and mtd-utils/mkfs.jffs2 now honor SOURCE_DATE_EPOCH.

Toolchain development and fixes

There was discussion on #782654 about packaging bazel for Debian.

Dan Kegel wrote a patch to use ar deterministically for Homebrew, a package manager for macOS.

Dan Kegel worked on using SOURCE_DATE_EPOCH and other reproducibility fixes in fpm, a multi-platform package builder.

The Fedora Haskell team disabled parallel builds to achieve reproducible builds.

Bernhard M. Wiedemann submitted many patches upstream:

Packages fixed and bugs filed

Patches submitted upstream:

Other patches filed in Debian:

Reviews of unreproducible packages

573 package reviews have been added, 154 have been updated and 9 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (98)
diffoscope development

Version 83 was uploaded to unstable by Chris Lamb. This also moved the previous changes from experimental (where they had been uploaded) to unstable. It included contributions from previous weeks.

You can read about these changes in our previous weeks' posts, or view the changelog directly (raw form).

We plan to maintain a backport of this and future versions in stretch-backports.

Ximin Luo also worked on better html-dir output for very very large diffs such as those for GCC. So far, this includes unreleased work on a PartialString data structure which will form a core part of a new and more intelligent recursive display algorithm.

strip-nondeterminism development

Version 0.035-1 was uploaded to unstable from experimental by Chris Lamb. It included contributions from:

  • Bernhard M. Wiedemann
    • Add CPIO handler and test case.
  • Chris Lamb
    • Packaging improvements.

Later in the week Mattia Rizzolo uploaded 0.035-2 with some improvements to the autopkgtest and to the general packaging.

We currently don't plan to maintain a backport in stretch-backports like we did for jessie-backports. Please speak up if you think otherwise.

reproducible-website development
  • Chris Lamb:
    • Add OpenEmbedded to projects page after a discussion at LinuxCon China.
    • Update some metadata for existing talks.
    • Add 13 missing talks.
tests.reproducible-builds.org
  • Alexander 'lynxis' Couzens
    • LEDE: do a quick sha256sum before calling diffoscope. The LEDE build consists of 1000 packages; using diffoscope to detect whether two packages are identical takes 3 seconds on average, while calling sha256sum on those small packages takes less than a second, so this reduces the runtime from 3h to 2h (roughly). For Debian package builds this is negligible, as each build takes several minutes anyway, so adding 3 seconds to each build doesn't matter much.
    • LEDE/OpenWrt: move toolchain.html creation to the remote node, as this is where the toolchain is built.
    • LEDE: remove debugging output for images.
    • LEDE: fixup HTML creation for toolchain, build path, downloaded software and GIT commit used.
  • Mattia 'mapreri' Rizzolo:
    • Debian: introduce Buster.
    • Debian: explain how to migrate from squid3 (in jessie) to squid (in stretch).
  • Holger 'h01ger' Levsen:
    • Debian:
      • Add jenkins jobs to create schroots and configure pbuilder for Buster.
      • Add Buster to README/about jenkins.d.n.
      • Teach jessie and ubuntu 16.04 systems how to debootstrap Buster.
      • Only update indexes and pkg_sets every 30 minutes, as the jobs now run for almost 15 minutes since we test four suites (compared to three before).
      • Create HTML dashboard, live status and dd-list pages less often.
      • (Almost) stop scheduling old packages in stretch, new versions will still be scheduled and tested as usual.
      • Increase scheduling limits, especially for untested, new and depwait packages.
      • Replace Stretch with Buster in the repository comparison page.
      • Only keep build_service logs for a day, not three.
      • Add check for hanging mounts to node health checks.
      • Add check for haveged to node health checks.
      • Disable ntp.service on hosts running in the future, needed on stretch.
      • Install amd64 kernels on all i386 systems. There is a performance issue with i386 kernels, for which a bug should be filed. Installing the amd64 kernel is a sufficient workaround, but it breaks our 32/64 bit kernel variation on i386.
    • LEDE, OpenWrt: Fix up links and split TODO list.
    • Upgrade i386 systems (used for Debian) and pb3+4-amd64 (used for coreboot, LEDE, OpenWrt, NetBSD, Fedora and Arch Linux tests) to Stretch
    • jenkins: use java 8 as required by jenkins >= 2.60.1
Misc.

This week's edition was written by Ximin Luo, Holger Levsen, Bernhard M. Wiedemann, Mattia Rizzolo, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Colin Watson: New address book

27 June, 2017 - 18:57

I’ve had a kludgy mess of electronic address books for most of two decades, and have got rather fed up with it. My stack consisted of:

  • ~/.mutt/aliases, a flat text file consisting of mutt alias commands
  • lbdb configuration to query ~/.mutt/aliases, Debian’s LDAP database, and Canonical’s LDAP database, so that I can search by name with Ctrl-t in mutt when composing a new message
  • Google Contacts, which I used from Android and was completely separate from all of the above

The biggest practical problem with this was that I had the address book that was most convenient for me to add things to (Google Contacts) and the one I used when sending email, and no sensible way to merge them or move things between them. I also wasn’t especially comfortable with having all my contact information in a proprietary web service.

My goals for a replacement address book system were:

  • free software throughout
  • storage under my control
  • single common database
  • minimal manual transcription when consolidating existing databases
  • integration with Android such that I can continue using the same contacts, messaging, etc. apps
  • integration with mutt such that I can continue using the same query interface
  • not having to write my own software, because honestly

I think I have all this now!

New stack

The obvious basic technology to use is CardDAV: it’s fairly complex, admittedly, but lots of software supports it and one of my goals was not having to write my own thing. This meant I needed a CardDAV server, some way to sync the database to and from both Android and the system where I run mutt, and whatever query glue was necessary to get mutt to understand vCards.

There are lots of different alternatives here, and if anything the problem was an embarrassment of choice. In the end I just decided to go for things that looked roughly the right shape for me and tried not to spend too much time in analysis paralysis.

CardDAV server

I went with Xandikos for the server, largely because I know Jelmer and have generally had pretty good experiences with their software, but also because using Git for history of the backend storage seems like something my future self will thank me for.

It isn’t packaged in stretch, but it’s in Debian unstable, so I installed it from there.

Rather than the standalone mode suggested on the web page, I decided to set it up in what felt like a more robust way using WSGI. I installed uwsgi, uwsgi-plugin-python3, and libapache2-mod-proxy-uwsgi, and created the following file in /etc/uwsgi/apps-available/xandikos.ini, which I then symlinked into /etc/uwsgi/apps-enabled/xandikos.ini:

[uwsgi]
socket = 127.0.0.1:8801
uid = xandikos
gid = xandikos
umask = 022
master = true
cheaper = 2
processes = 4
plugin = python3
module = xandikos.wsgi:app
env = XANDIKOSPATH=/srv/xandikos/collections

The port number was arbitrary, as was the path. You need to create the xandikos user and group first (adduser --system --group --no-create-home --disabled-login xandikos). I created /srv/xandikos owned by xandikos:xandikos and mode 0700, and I recommend setting a umask as shown above since uwsgi’s default umask is 000 (!). You should also run sudo -u xandikos xandikos -d /srv/xandikos/collections --autocreate and then Ctrl-c it after a short time (I think it would be nicer if there were a way to ask the WSGI wrapper to do this).
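
Collected together, those one-time steps look roughly like this (a sketch: the paths match the configuration above, and the restart at the end is an assumption that Debian's stock uwsgi service picks up apps-enabled on restart):

adduser --system --group --no-create-home --disabled-login xandikos
mkdir /srv/xandikos
chown xandikos:xandikos /srv/xandikos
chmod 0700 /srv/xandikos
ln -s /etc/uwsgi/apps-available/xandikos.ini /etc/uwsgi/apps-enabled/xandikos.ini
# bootstrap the collection layout, then Ctrl-c it after a short time
sudo -u xandikos xandikos -d /srv/xandikos/collections --autocreate
service uwsgi restart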

For Apache setup, I kept it reasonably simple: I ran a2enmod proxy_uwsgi, used htpasswd to create /etc/apache2/xandikos.passwd with a username and password for myself, added a virtual host in /etc/apache2/sites-available/xandikos.conf, and enabled it with a2ensite xandikos:

<VirtualHost *:443>
        ServerName xandikos.example.org
        ServerAdmin me@example.org

        ErrorLog /var/log/apache2/xandikos-error.log
        TransferLog /var/log/apache2/xandikos-access.log

        <Location />
                ProxyPass "uwsgi://127.0.0.1:8801/"
                AuthType Basic
                AuthName "Xandikos"
                AuthBasicProvider file
                AuthUserFile "/etc/apache2/xandikos.passwd"
                Require valid-user
        </Location>
</VirtualHost>

Then service apache2 reload, set the new virtual host up with Let’s Encrypt, reloaded again, and off we go.
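
In command form, the whole Apache side amounts to something like this (the username is a placeholder, and certbot is shown as one way to do the Let's Encrypt step; substitute your preferred ACME client):

a2enmod proxy_uwsgi
htpasswd -c /etc/apache2/xandikos.passwd yourusername
# put the <VirtualHost> stanza above into /etc/apache2/sites-available/xandikos.conf
a2ensite xandikos
service apache2 reload
certbot --apache -d xandikos.example.org
service apache2 reload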

Android integration

I installed DAVdroid from the Play Store: it cost a few pounds, but I was OK with that since it’s GPLv3 and I’m happy to help fund free software. I created two accounts, one for my existing Google Contacts database (and in fact calendaring as well, although I don’t intend to switch over to self-hosting that just yet), and one for the new Xandikos instance. The Google setup was a bit fiddly because I have two-step verification turned on, so I had to create an app-specific password. The Xandikos setup was straightforward: base URL, username, password, and done.

Since I didn’t completely trust the new setup yet, I followed what seemed like the most robust option from the DAVdroid contacts syncing documentation, and used the stock contacts app to export my Google Contacts account to a .vcf file and then import that into the appropriate DAVdroid account (which showed up automatically). This seemed straightforward and everything got pushed to Xandikos. There are some weird delays in syncing contacts that I don’t entirely understand, but it all seems to get there in the end.

mutt integration

First off I needed to sync the contacts. (In fact I happen to run mutt on the same system where I run Xandikos at the moment, but I don’t want to rely on that, and going through the CardDAV server means that I don’t have to poke holes for myself using filesystem permissions.) I used vdirsyncer for this. In ~/.vdirsyncer/config:

[general]
status_path = "~/.vdirsyncer/status/"

[pair contacts]
a = "contacts_local"
b = "contacts_remote"
collections = ["from a", "from b"]

[storage contacts_local]
type = "filesystem"
path = "~/.contacts/"
fileext = ".vcf"

[storage contacts_remote]
type = "carddav"
url = "<Xandikos base URL>"
username = "<my username>"
password = "<my password>"

Running vdirsyncer discover and vdirsyncer sync then synced everything into ~/.contacts/. I added an hourly crontab entry to run vdirsyncer -v WARNING sync.
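
The crontab entry itself is nothing special; something like this (the minute is arbitrary):

17 * * * * vdirsyncer -v WARNING sync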

Next, I needed a command-line address book tool based on this. khard looked about right and is in stretch, so I installed that. In ~/.config/khard/khard.conf (this is mostly just the example configuration, but I preferred to sort by first name since not all my contacts have neat first/last names):

[addressbooks]
[[contacts]]
path = ~/.contacts/<UUID of my contacts collection>/

[general]
debug = no
default_action = list
editor = vim
merge_editor = vimdiff

[contact table]
# display names by first or last name: first_name / last_name
display = first_name
# group by address book: yes / no
group_by_addressbook = no
# reverse table ordering: yes / no
reverse = no
# append nicknames to name column: yes / no
show_nicknames = no
# show uid table column: yes / no
show_uids = yes
# sort by first or last name: first_name / last_name
sort = first_name

[vcard]
# extend contacts with your own private objects
# these objects are stored with a leading "X-" before the object name in the vcard files
# every object label may only contain letters, digits and the - character
# example:
#   private_objects = Jabber, Skype, Twitter
private_objects = Jabber, Skype, Twitter
# preferred vcard version: 3.0 / 4.0
preferred_version = 3.0
# Look into source vcf files to speed up search queries: yes / no
search_in_source_files = no
# skip unparsable vcard files: yes / no
skip_unparsable = no

Now khard list shows all my contacts. So far so good. Apparently there are some awkward vCard compatibility issues with creating or modifying contacts from the khard end. I’ve tried adding one address from ~/.mutt/aliases using khard and it seems to at least minimally work for me, but I haven’t explored this very much yet.

I had to install python3-vobject 0.9.4.1-1 from experimental to fix eventable/vobject#39, which affected saving certain vCard files.
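
Assuming you already have an experimental entry in your APT sources, fetching it is a one-liner:

apt install -t experimental python3-vobject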

Finally, mutt integration. I had already set query_command="lbdbq '%s'" in ~/.muttrc, and I wanted to keep that in place since I still wanted to use LDAP querying as well. I had to write a very small amount of code for this (perhaps I should contribute it to lbdb upstream?), in ~/.lbdb/modules/m_khard:

#! /bin/sh
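# lbdb query module: lbdb calls m_khard_query with the search term as $1
# and expects one match per line in mutt's query format (tab-separated
# address, name and comment), which khard's --parsable output provides.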

m_khard_query () {
    khard email --parsable --remove-first-line --search-in-source-files "$1"
}
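
To sanity-check the module, you can run the same command by hand; the output below is invented, but shows the shape mutt wants (address, name, and an optional comment, tab-separated):

$ khard email --parsable --remove-first-line --search-in-source-files jane
jane@example.org	Jane Example	work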

My full ~/.lbdb/rc now reads as follows (you probably won’t want the LDAP stuff, but I’ve included it here for completeness):

MODULES_PATH="$MODULES_PATH $HOME/.lbdb/modules"
METHODS='m_muttalias m_khard m_ldap'
LDAP_NICKS='debian canonical'

Next steps

I’ve deleted one contact from Google Contacts just to make sure that everything still works (e.g. I can still search for it when composing a new message), but I haven’t yet deleted everything. I won’t be adding anything new there though.

I need to push everything from ~/.mutt/aliases into the new system. This is only about 30 contacts so shouldn’t take too long.
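
If doing that by hand gets tedious, a throwaway script along these lines would do most of it (very much a sketch: it assumes simple one-line "alias nick Some Name <address>" entries, does no vCard escaping, and writes straight into the synced collection, whose UUID path here is a placeholder):

#! /bin/sh
# convert simple mutt alias lines into minimal vCard 3.0 files
COLLECTION="$HOME/.contacts/CHANGE-ME-UUID"
while read -r keyword nick rest; do
    [ "$keyword" = alias ] || continue
    case "$rest" in
        *\<*\>*) ;;
        *) continue ;;  # skip aliases without an <address> part
    esac
    name="${rest% <*}"
    email="${rest##*<}"; email="${email%>}"
    uid="$(uuidgen)"
    printf '%s\n' "BEGIN:VCARD" "VERSION:3.0" "UID:$uid" \
        "FN:$name" "N:$name;;;;" "NICKNAME:$nick" \
        "EMAIL:$email" "END:VCARD" > "$COLLECTION/$uid.vcf"
done < "$HOME/.mutt/aliases"

khard sees the new cards immediately (they land in ~/.contacts/), and the next vdirsyncer sync pushes them up to Xandikos.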

Overall this feels like a big improvement! It wasn’t a trivial amount of setup for just me, but it means I have both better usability for myself and more independence from proprietary services, and I think I can add extra users with much less effort if I need to.

Joey Hess: 12 to 24 volt house conversion

27 June, 2017 - 10:44

Upgrading my solar panels involved switching the house from 12 volts to 24 volts. No reasonably priced charge controllers can handle 1 kW of PV at 12 volts.

There might not be a lot of people who need to do this; entirely 12 volt offgrid houses are not super common, and most upgrades these days probably involve rooftop microinverters and so would involve a switch from DC to AC. I did not find a lot of references online for converting a whole house's voltage from 12V to 24V.

To prepare, I first checked that all the fuses and breakers were rated for > 24 volts. (Actually, > 30 volts because it will be 26 volts or so when charging.) Also, I checked for any shady wiring, and verified that all the wires I could see in the attic and wiring closet were reasonably sized (10AWG) and in good shape.

Then I:

  1. Turned off every light, unplugged every plug, pulled every fuse and flipped every breaker.
  2. Rewired the battery bank from 12V to 24V.
  3. Connected the battery bank to the new charge controller.
  4. Engaged the main breaker, and waited for anything strange.
  5. Screwed in one fuse at a time.

lighting

The house used all fluorescent lights, and they have ballasts rated for only 12V. While they work at 24V, they might blow out sooner or overheat. In fact one died this evening, and while it was flickering before, I suspect the 24V did it in. It makes sense to replace them with more efficient LED lights anyway. I found some 12-24V DC LED lights for regular screw-in (edison) light fixtures. Does not seem very common; Amazon only had a few models and they shipped from China.

Also, I ordered a 15 foot long, 300 LED strip light, which runs on 24V DC and has an adhesive backing. Great stuff -- it can be cut to different lengths and stuck anywhere. I installed some underneath the cook stove hood and the kitchen cabinets, which didn't have lights before.

Similar LED strips are used in some desktop lamps. My lamp was 12V only (barely lit at 24V), but I was able to replace its LED strip, upgrading it to 24V and three times as bright.

(Christmas lights are another option; many LED christmas lights run on 24V.)

appliances

The power supply I use for my Lenovo laptop in the house is a vehicle DC-DC converter, rated for 12-24V. It seems to be running fine at 26V, and did not get warm even when charging the laptop up from empty.

I'm using buck converters to run various USB powered (5V) ARM boxes such as my sheevaplug. They're quarter sized, so fit anywhere, and are very efficient.

My satellite internet receiver is running with a large buck converter, feeding 12V to an inverter, which in turn feeds a 30V DC power supply. That triple conversion is inefficient, but it works for now.

The ceiling fan runs on 24V, and does not seem to run much faster than on 12V. It may be rated for 12-24V. Can't seem to find any info about it.

The radio is a 12V car radio. I used an LM317 to run it on 24V, to avoid the RF interference a buck converter would have produced. This is a very inefficient conversion; half of the power is wasted as heat. But since I can stream internet radio all day now via satellite, I'll not use the FM radio very often.
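
To spell out the arithmetic (a back-of-the-envelope figure, taking a nominal 24V input; at the 26V or so seen while charging it's slightly worse): a linear regulator passes the load current while dropping the entire input-output difference across itself, so

    P_load  = V_out * I
    P_waste = (V_in - V_out) * I = (24 - 12) * I = P_load
    efficiency = V_out / V_in = 12 / 24 = 50%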

Fridge... still running on propane for now, but I have an idea for a way to build a cold storage battery that will use excess power from the PV array, and keep a fridge at a constant 34 degrees F. Next home improvement project in the queue.

Benjamin Mako Hill: Learning to Code in One’s Own Language

27 June, 2017 - 08:00

I recently published a paper with Sayamindu Dasgupta that provides evidence in support of the idea that kids can learn to code more quickly when they are programming in their own language.

Millions of young people from around the world are learning to code. Often, during their learning experiences, these youth are using visual block-based programming languages like Scratch, App Inventor, and Code.org Studio. In block-based programming languages, coders manipulate visual, snap-together blocks that represent code constructs instead of textual symbols and commands that are found in more traditional programming languages.

The textual symbols used in nearly all non-block-based programming languages are drawn from English—consider “if” statements and “for” loops for common examples. Keywords in block-based languages, on the other hand, are often translated into different human languages. For example, depending on the language preference of the user, an identical set of computing instructions in Scratch can be represented in many different human languages:

Examples of a short piece of Scratch code shown in four different human languages: English, Italian, Norwegian Bokmål, and German.

Although my research with Sayamindu Dasgupta focuses on learning, both Sayamindu and I worked on local language technologies before coming back to academia. As a result, we were both interested in how the increasing translation of programming languages might be making it easier for non-English speaking kids to learn to code.

After all, a large body of education research has shown that early-stage education is more effective when instruction is in the language that the learner speaks at home. Based on this research, we hypothesized that children learning to code with block-based programming languages translated to their mother-tongues will have better learning outcomes than children using the blocks in English.

We sought to test this hypothesis in Scratch, an informal learning community built around a block-based programming language. We were helped by the fact that Scratch is translated into many languages and has a large number of learners from around the world.

To measure learning, we built on some of our own previous work and looked at learners’ cumulative block repertoires—similar to a code vocabulary. By observing a learner’s cumulative block repertoire over time, we can measure how quickly their code vocabulary is growing.

Using this data, we compared the rate of growth of cumulative block repertoire between learners from non-English speaking countries using Scratch in English to learners from the same countries using Scratch in their local language. To identify non-English speakers, we considered Scratch users who reported themselves as coming from five primarily non-English speaking countries: Portugal, Italy, Brazil, Germany, and Norway. We chose these five countries because they each have one very widely spoken language that is not English and because Scratch is almost fully translated into that language.

Even after controlling for a number of factors like social engagement on the Scratch website, user productivity, and time spent on projects, we found that learners from these countries who use Scratch in their local language have a higher rate of cumulative block repertoire growth than their counterparts using Scratch in English. This faster growth was despite having a lower initial block repertoire. The graph below visualizes our results for two “prototypical” learners who start with the same initial block repertoire: one learner who uses the English interface, and a second learner who uses their native language.

Summary of the results of our model for two prototypical individuals.

Our results are in line with what theories of education have to say about learning in one’s own language. Our findings also represent good news for designers of block-based programming languages who have spent considerable amounts of effort in making their programming languages translatable. It’s also good news for the volunteers who have spent many hours translating blocks and user interfaces.

Although we find support for our hypothesis, we should stress that our findings are both limited and incomplete. For example, because we focus on estimating the differences between Scratch learners, our comparisons are between kids who all managed to successfully use Scratch. Before Scratch was translated, kids with little working knowledge of English or the Latin script might not have been able to use Scratch at all. Because of translation, many of these children are now able to learn to code.

This blog post and the work that it describes is a collaborative project with Sayamindu Dasgupta. Sayamindu also published a very similar version of the blog post in several places. Our paper is open access and you can read it here. The paper was published in the proceedings of the ACM Learning @ Scale Conference. We also recently gave a talk about this work at the International Communication Association’s annual conference. We received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Nathan TeBlunthuis at the University of Washington. Financial support came from the US National Science Foundation.
