Planet Debian

Planet Debian - http://planet.debian.org/

Ritesh Raj Sarraf: apt-offline 1.8.0 released

22 May, 2017 - 04:17

I am pleased to announce the release of apt-offline, version 1.8.0. This release is mainly a forward port of apt-offline to Python 3 and PyQt5. There are some glitches related to Python 3 and PyQt5, but overall the CLI interface works fine. Other than the porting, an important bug related to a memory leak when using the MIME library has been fixed. There are also some updates to the documentation (user examples) based on feedback from users.

The release is available from GitHub and Alioth.

What is apt-offline?

Description: offline APT package manager

apt-offline is an Offline APT Package Manager.

apt-offline can fully update and upgrade an APT-based distribution without connecting to the network, all of it transparent to APT.

apt-offline can be used to generate a signature on a machine (with no network). This signature contains all download information required for the APT database system. This signature file can be used on another machine connected to the internet (which need not be a Debian box and can even be running Windows) to download the updates. The downloaded data will contain all updates in a format understood by APT, and this data can be used by apt-offline to update the non-networked machine.

apt-offline can also fetch bug reports and make them available offline.
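
A typical round trip looks roughly like this (a sketch only; the exact options are documented in the apt-offline man page, and the file names are just placeholders):

# on the offline Debian machine: record what APT needs
sudo apt-offline set /tmp/apt-offline.sig --update --upgrade
# on any machine with internet access: download everything into a bundle
apt-offline get /tmp/apt-offline.sig --bundle /tmp/apt-offline-bundle.zip
# back on the offline machine: feed the bundle to APT and upgrade as usual
sudo apt-offline install /tmp/apt-offline-bundle.zip
sudo apt-get upgrade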


Holger Levsen: 20170521-this-time-of-the-year

22 May, 2017 - 01:26
It's this time of the year again…

So it seems summer has finally arrived here and for the first time this year I've been offline for more than 24h, even despite having wireless network coverage. The lake, the people, the bonfire, the music, the mosquitos and the fireworks at 3.30 in the morning were totally worth it!

Russ Allbery: Review: Sector General

22 May, 2017 - 00:21

Review: Sector General, by James White

Series: Sector General #5
Publisher: Orb
Copyright: 1983
Printing: 2002
ISBN: 0-312-87770-6
Format: Trade paperback
Pages: 187

Sector General is the fifth book (or, probably more accurately, collection) in the Sector General series. I blame the original publishers for the confusion. The publication information is for the Alien Emergencies omnibus, which includes the fourth through the sixth books in the series.

Looking back on my previous reviews of this series (wow, it's been eight years since I read the last one?), I see I was reviewing them as novels rather than as short story collections. In retrospect, that was a mistake, since they're composed of clearly stand-alone stories with a very loose arc. I'm not going to go back and re-read the earlier collections to give them proper per-story reviews, but I may as well do this properly here.

Overall, this collection is more of the same, so if that's what you want, there won't be any negative surprises. It's another four engineer-with-a-wrench stories about biological and medical puzzles, with only a tiny bit of characterization and little hint of any personal life for any of the characters outside of the job. Some stories are forgettable, but White does create some memorable aliens. Sadly, the stories don't take us to the point of real communication, so those aliens stop at biological puzzles and guesswork. "Combined Operation" is probably the best, although "Accident" is the most philosophical and an interesting look at the founding principle of Sector General.

"Accident": MacEwan and Grawlya-Ki are human and alien brought together by a tragic war, and forever linked by a rather bizarre war monument. (It's a very neat SF concept, although the implications and undiscussed consequences don't bear thinking about too deeply.) The result of that war was a general recognition that such things should not be allowed to happen again, and it brought about a new, deep commitment to inter-species tolerance and politeness. Which is, in a rather fascinating philosophical twist, exactly what MacEwan and Grawlya-Ki are fighting against: not the lack of aggression, which they completely agree with, but with the layers of politeness that result in every species treating all others as if they were eggshells. Their conviction is that this cannot create a lasting peace.

This insight is one of the most profound bits I've read in the Sector General novels and supports quite a lot of philosophical debate. (Sadly, there isn't a lot of that in the story itself.) The backdrop against which it plays out is an accidental crash in a spaceport facility, creating a dangerous and potentially deadly environment for a variety of aliens. Given the collection in which this is included and the philosophical bent described above, you can probably guess where this goes, although I'll leave it unspoiled if you can't. It's an idea that could have been presented with more subtlety, but it's a really great piece of setting background that makes the whole series snap into focus. A much better story in context than its surface plot. (7)

"Survivor": The hospital ship Rhabwar rescues a sole survivor from the wreck of an alien ship caused by incomplete safeguards on hyperdrive generators. The alien is very badly injured and unconscious and needs the full attention of Sector General, but on the way back, the empath Prilicla also begins suffering from empathic hypersensitivity. Conway, the protagonist of most of this series, devotes most of his attention to that problem, having delivered the rescued alien to competent surgical hands. But it will surprise no regular reader that the problems turn out to be linked (making it a bit improbable that it takes the doctors so long to figure that out). A very typical entry in the series. (6)

"Investigation": Another very typical entry, although this time the crashed spaceship is on a planet. The scattered, unconscious bodies of the survivors, plus signs of starvation and recent amputation on all of them, convinces the military (well, police is probably more accurate) escort that this is may be a crime scene. The doctors are unconvinced, but cautious, and local sand storms and mobile vegetation add to the threat. I thought this alien design was a bit less interesting (and a lot creepier). (6)

"Combined Operation": The best (and longest) story of this collection. Another crashed alien spacecraft, but this time it's huge, large enough (and, as they quickly realize, of a design) to indicate a space station rather than a ship, except that it's in the middle of nowhere and each segment contains a giant alien worm creature. Here, piecing together the biology and the nature of the vehicle is only the beginning; the conclusion points to an even larger problem, one that requires drawing on rather significant resources to solve. (On a deadline, of course, to add some drama.) This story requires the doctors to go unusually deep into the biology and extrapolated culture of the alien they're attempting to rescue, which made it more intellectually satisfying for me. (7)

Followed by Star Healer.

Rating: 6 out of 10

Adnan Hodzic: Automagically deploy & run containerized WordPress (PHP7 FPM, Nginx, MariaDB) using Ansible + Docker on AWS

21 May, 2017 - 23:28

In this blog post, I describe how what started as a simple migration of a WordPress blog to AWS ended up as an automation project consisting of publishing multiple Ansible roles that deploy and run multiple Docker images.

If you're not interested in reading about my entire journey, the insights gained and how this process came to be, please skip down to the "Birth of: containerized-wordpress-project (TL;DR)" section.

Migrating WordPress blog to AWS (EC2, Lightsail?)

I've been sold on Amazon's AWS idea of cloud computing "services" for a couple of years now. I've wanted, and been trying, to migrate this (WordPress) blog to AWS, but somehow it never worked out.

Moving it to an EC2 instance, with its own EBS volumes, AMI, EIP, Security Group … it just seemed like overkill.

When AWS Lightsail was first released, it seemed like the answer to all my problems.

But it wasn't, even disregarding its somewhat restrictive, dumbed-down versions of the original features. Living in Amsterdam, my main problem was that it was only available in a single US region.

Regardless, I thought it had everything I needed for a WordPress site, and as a new service, it had great potential.

Its regional limitations were also good in the sense that they made me realize one important thing: once I migrated my blog to AWS, I wanted to be able to seamlessly move it across different EC2 instances and different regions as they became available.

If done properly, it meant I could even move it across different clouds (I'm talking to you, Google Cloud).

P.S.: AWS Lightsail is now available in a couple of different regions across Europe, and the rollout was almost seamless.

Fundamental problem of every migration … is migration

Phase 1: Don’t reinvent the wheel?

When you have a WordPress site that's not self-hosted, you want everything to work, yet you really don't want to spend any time managing the infrastructure it runs on.

And as soon as I started looking at what could fit these criteria, I found that there were pre-configured, out-of-the-box WordPress EC2 images available on AWS Marketplace. Great!

But when I took a closer look, although everything ran out of the box, I wasn't happy with the software stack it was all built on: namely Ubuntu 14.04 and Apache, with all of the services started using custom scripts. Yuck.

With this setup, when it was time to upgrade (and it's already that time), you wouldn't be thinking about an upgrade. You'd only be thinking about another migration.

Phase 2: What if I built everything myself?

Installing and configuring everything manually, and then writing a huge HowTo to follow whenever I needed to re-create the whole stack, was not an option. The same goes for scripting the whole process, as the overhead of changes that had to be tracked was way too big.

Being a huge Ansible fan, automating this was the natural next step.

I even found an awesome Ansible role which seemed like it would do everything I needed. Except I realized I needed to update all the software it deploys, and customize it, since the configuration it was built around wasn't generic enough.

So I forked it and got to work. But soon enough, I was knee-deep in making and fiddling with various system changes. That was exactly what I was trying to get away from in this case, and most importantly what I was trying to avoid when the time came for the next update.

Phase 3: Marriage made in heaven: Ansible + Docker + AWS

The idea to have everything Dockerized was there from the very start. However, it never made a lot of sense until I put Ansible into the same picture. It was at this point that my final idea and requirements became crystal clear.

Use Ansible to configure and set up a host ready for the Docker ecosystem: an ecosystem consisting of a separate container for each required service (WordPress + Nginx + MariaDB), all linked together as a single service using Docker Compose.

The idea was backed by the goal of spending minimal to no time (and effort) on manual configuration of anything on the server. My level of attachment to this server was so low that I didn't even want to SSH into it.

If there was something wrong, I could just nuke the whole thing and deploy the code on a newly rolled-out, healthy server with everything working out of the box.

After it was clear what needed to be done, I got to work.

Birth of: containerized-wordpress-project (TL;DR)

After a lot of work, the end result is a project which allows you to automagically deploy & run a containerized WordPress instance consisting of 3 separate containers running:

  • WordPress (PHP7 FPM)
  • Nginx
  • MariaDB

Once run, the containerized-wordpress playbook will guide you through an interactive setup of all 3 containers, after which it will run all the Ansible roles created for this project. The end result is that a host you have never even SSH-ed into will be fully configured and running a containerized WordPress instance out of the box.

Most importantly, this whole process will be completed in <= 5 minutes and doesn’t require any Docker or Ansible knowledge!
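
If you want to try it yourself, the invocation is roughly of this shape (the inventory and playbook file names here are hypothetical; check the project's README for the actual commands):

# file names are illustrative, not necessarily the project's actual layout
ansible-playbook -i inventory containerized-wordpress.yml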

containerized-wordpress demo

Console output of running “containerized-wordpress” Ansible Playbook:

Accessing WordPress instance created from “containerized-wordpress” Ansible Playbook:

Did I end up migrating to AWS in the end?

You bet. Thanks to the efforts made in the containerized-wordpress-project, I'm happy to report that my whole WordPress migration to AWS was completed in a matter of minutes and that this blog is now running on Docker and on AWS!

I hope this same project will help you take a leap in your migration.

Happy hacking!

Elena 'valhalla' Grandi: Modern XMPP Server

21 May, 2017 - 18:30
Modern XMPP Server

I've published a new HOWTO on my website http://www.trueelena.org/computers/howto/modern_xmpp_server.html:

http://www.enricozini.org/blog/2017/debian/modern-and-secure-instant-messaging/ already wrote about the Why (and the What, Who and When), so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.

How

I've decided to install https://prosody.im/, mostly because it was recommended by the RTC QuickStart Guide http://rtcquickstart.org/; I've heard that similar results can be reached with https://www.ejabberd.im/ and other servers.

I'm also targeting https://www.debian.org/ stable (+ backports); as I write this, that is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites

You will need to enable the https://backports.debian.org/ repository and then install the packages prosody and prosody-modules.

You also need to set up some TLS certificates (I used Let's Encrypt https://letsencrypt.org/) and make them readable by the prosody user; see Chapter 12 of the RTC QuickStart Guide http://rtcquickstart.org/guide/multi/xmpp-server-prosody.html for more details.
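
For example, with Let's Encrypt certificates something along these lines works; the destination paths match the virtualhost example further down, and the source paths assume Let's Encrypt's default layout:

sudo mkdir -p /etc/ssl/public
sudo cp /etc/letsencrypt/live/chat.example.org/fullchain.pem /etc/ssl/public/example.org.pem
sudo cp /etc/letsencrypt/live/chat.example.org/privkey.pem /etc/ssl/private/example.org-key.pem
# make the private key readable by the prosody user
sudo chgrp prosody /etc/ssl/private/example.org-key.pem
sudo chmod 640 /etc/ssl/private/example.org-key.pem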

On your firewall, you'll need to open the following TCP ports:

  • 5222 (client2server)
  • 5269 (server2server)
  • 5280 (default http port for prosody)
  • 5281 (default https port for prosody)

The latter two are needed to enable some services provided via http(s), including rich media transfers.
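
How you open them depends on your firewall; with plain iptables it could look like this (a sketch, to be adapted to your existing ruleset):

for port in 5222 5269 5280 5281; do
    sudo iptables -A INPUT -p tcp --dport "$port" -j ACCEPT
done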

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim https://en.wikipedia.org/wiki/Messaging_spam).

prosody configuration

You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:


c2s_require_encryption = true
s2s_secure_auth = true


and then, sadly, add to the whitelist any server that you want to talk to and doesn't support the above:


s2s_insecure_domains = { "gmail.com" }


virtualhosts

For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:


VirtualHost "chat.example.org"
enabled = true
ssl = {
key = "/etc/ssl/private/example.org-key.pem";
certificate = "/etc/ssl/public/example.org.pem";
}

For the domains where you also want to enable MUCs (multi-user chats), add the following lines:


Component "conference.chat.example.org" "muc"
restrict_room_creation = "local"

the "local" configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usages of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):


Component "upload.chat.example.org" "http_upload"

The defaults are pretty sane, but see https://modules.prosody.im/mod_http_upload.html for details on the knobs you can configure for this module.

Don't forget to enable the virtualhost by linking the file inside /etc/prosody/conf.d/.
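
For example (restart or reload prosody afterwards so it picks up the new host):

sudo ln -s /etc/prosody/conf.avail/chat.example.org.cfg.lua /etc/prosody/conf.d/
sudo prosodyctl restart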

additional modules

Most of the other interesting XEPs are enabled by loading additional modules inside /etc/prosody/prosody.cfg.lua (under modules_enabled); to enable mod_something just add a line like:


"something";

Most of these come from the prosody-modules package (and thus from https://modules.prosody.im/ ), and some may require changes when prosody 0.10 becomes available; where this is the case, it is mentioned below.



  • mod_carbons (XEP-0280)
    To keep conversations synchronized while using multiple devices at the same time.

    This will be included by default in prosody 0.10.



  • mod_privacy + mod_blocking (XEP-0191)
    To allow user-controlled blocking of users, including as an anti-spim measure.

    In prosody 0.10 these two modules will be replaced by mod_blocklist.



  • mod_smacks (XEP-0198)
    Allow clients to resume a disconnected session before a customizable timeout and prevent message loss.



  • mod_mam (XEP-0313)
    Archive messages on the server for a limited period of time (default 1 week) and allow clients to retrieve them; this is required to synchronize message history between multiple clients.

    With prosody 0.9 only an in-memory storage backend is available, which may make this module problematic on servers with many users. prosody 0.10 will fix this by adding support for SQL-backed storage with archiving capabilities.



  • mod_throttle_presence + mod_filter_chatstates (XEP-0352)
    Filter out presence updates and chat states when the client announces (via Client State Indication) that the user isn't looking. This is useful to reduce power and bandwidth usage for "useless" traffic.




@Gruppo Linux Como @LIFO

Neil Williams: Software, service, data and freedom

20 May, 2017 - 14:24
Free software, free services but what about your data?

I care a lot about free software, not only as a Debian Developer. The use of software as a service matters as well because my principal free software development is on just such a project, licensed under the GNU Affero General Public License version 3. The AGPL helps by allowing anyone who is suitably skilled to install their own copy of the software and run their own service on their own hardware. As a project, we are seeing increasing numbers of groups doing exactly this and these groups are actively contributing back to the project.

So what is the problem? We've got an active project, an active community and everything is under a free software licence and regularly uploaded to Debian main. We have open code review with anonymous access to our own source code CI and anonymous access to project planning, open mailing list archives as well as an open bug tracker and a very active IRC channel (#linaro-lava on OFTC). We develop in the open, we respond in the open and we publish frequently (monthly, approximately). The code we write defaults to public visibility at runtime with restrictions available for certain use cases.

What else can we be doing? Well it was a simple question which started me thinking.

The lava documentation has various example test scripts e.g. https://validation.linaro.org/static/docs/v2/examples/test-jobs/qemu-kernel-standard-sid.yaml

these have no licence information; we've adapted them for a Linux Foundation project. What licence should apply to these files?

Robert Marshall

Those are our own examples, contributed as part of the documentation and covered by the AGPL like the rest of the documentation and the software which it documents, so I replied with the same. However, what about all the other submissions received by the service?

Data Freedom

LAVA acts by providing a service to authenticated users. The software runs your test code on hardware which might not be available to the user or which is simply inconvenient for the test writer to set up themselves. The AGPL covers this nicely.

What about the data contributed by the users? We make this available to other users who will, naturally, copy and paste for their own tests. In most cases, because the software defaults to public access, anonymous users also get to learn from the contributions of other test writers. This is a good thing and to be encouraged. (One reason why we moved to YAML for all submissions was to allow comments to help other users understand why the submission does specific things.)

Writing a test job submission or a test shell definition from scratch is a non-trivial amount of work. We've written dozens of pages of documentation covering how and how not to do it, but the details of making a test job run exactly what the test writer requires can involve substantial effort. (Our documentation recommends using version control for each of these works for exactly these reasons.)

At what point do these works become software? At what point do these need licensing? How could that be declared?

Perils of the Javascript Trap approach

When reading up on the AGPL, I also read about Service as a Software Substitute (SaaSS) and this led to The Javascript Trap.

I don't consider LAVA to be SaaSS although it is Software as a Service (SaaS). (Distinguishing between those is best left to the GNU document as it is an almighty tangle at times.)

I did look at the GNU ideas for licensing Javascript but it seems cumbersome and unnecessary - a protocol designed for the specific purposes of their own service rather than as a solution which could be readily adopted by all such services.

The same problems affect trying to untangle sharing the test job data within LAVA.

Adding Licence text

The traditional way, of course, is simply to add twenty lines or so of comments at the top of every file. This works nicely for source code because the comments are hidden from the final UI (unless an explicit reference is made in the --help output or similar). It is less nice for human readable submissions where the first thing someone has to do is scroll past the comments to get to what they want to see. At that point, it starts to look like a popup or a nagging banner - blocking the requested content on a website to try and get the viewer to subscribe to a newsletter or pay for the rest of the content. Let's not actively annoy visitors who are trying to get things done.

Adding Licence files

This can be done in the remote version control repository - then a single line in the submitted file can point at the licence. This is how I'm seeking to solve the problem of our own repositories. If the reference URL is included in the metadata of the test job submission, it can even be linked into the test job metadata and made available to everyone through the results API.

metadata:
  licence.text: http://mysite/lava/git/COPYING
  licence.name: BSD 3 clause

Metadata in LAVA test job submissions is free-form but if the example was adopted as a convention for LAVA submissions, it would make it easy for someone to query LAVA for the licences of a range of test submissions.

Currently, LAVA does not store metadata from the test shell definitions except the URL of the git repo for the test shell definition but that may be enough in most cases for someone to find the relevant COPYING or LICENCE file.

Which licence?

This could be a problem too. If users contribute data under unfriendly licences, what is LAVA to do? I've used the BSD 3 clause in the above example as I expect it to be the most commonly used licence for these contributions. A copyleft licence could be used, although doing so would require additional metadata in the submission to declare how to contribute back to the original author (because that is usually not a member of the LAVA project).

Why not Creative Commons?

Although I'm referring to these contributions as data, these are not pieces of prose or images or audio. These are instructions (with comments) for a specific piece of software to execute on behalf of the user. As such, these objects must comply with the schema and syntax of the receiving service, so a code-based licence would seem correct.

Results

Finally, a word about what comes back from your data submission - the results. This data cannot be restricted by any licence affecting either the submission or the software, it can be restricted using the API or left as the default of public access.

If the results and the submission data really are private, then the solution is to take advantage of the AGPL, take the source code of LAVA and run it internally where the entire service can be placed within a firewall.

What happens next?
  1. Please consider editing your own LAVA test job submissions to add licence metadata.
  2. Please use comments in your own LAVA test job submissions, especially if you are using some form of template engine to generate the submission. This data will be used by others, it is easier for everyone if those users do not have to ask us or you about why your test job does what it does.
  3. Add a file to your own repositories containing LAVA test shell definitions to declare how these files can be shared freely.
  4. Think about other services to which you submit data which is either only partially machine generated or which is entirely human created. Is that data free-form or are you essentially asking the service to do a precise task on your behalf as if you were programming that server directly? (Jenkins is a classic example, closely related to LAVA.)
    • Think about how much developer time was required to create that submission and how the service publishes that submission in ways that allow others to copy and paste it into their own submissions.
    • Some of those submissions can easily end up in documentation or other published sources which will need to know about how to licence and distribute that data in a new format (i.e. modification.) Do you intend for that useful purpose to be defeated by releasing your data under All Rights Reserved?
Contact

I don't enable comments on this blog, but there are enough ways to contact me and the LAVA project in the body of this post; it really shouldn't be a problem for anyone to comment.

Ritesh Raj Sarraf: Patanjali Research Foundation

20 May, 2017 - 11:46

PSA: Research in the domain of Ayurveda

http://www.patanjaliresearchfoundation.com/patanjali/

I am so glad to see this initiative taken by the Patanjali group. This is a great stepping stone in the health and wellness domain.

So far, allopathy has been blunt in discarding alternative medicine practices, without much solid justification. The only response I've heard, repeatedly, is "lack of research". This initiative definitely is a great step in that regard.

Ayurveda (Ancient Hindu art of healing) has a huge potential to touch lives. For the Indian sub-continent, this has the potential of a blessing.

The Prime Minister of India himself inaugurated the research centre.


Clint Adams: Help the Aged

19 May, 2017 - 23:10

I keep meeting girls from Walnut Creek who don’t know about the CDROM.

Posted on 2017-05-19 Tags: ranticore

Michael Prokop: Debian stretch: changes in util-linux #newinstretch

19 May, 2017 - 15:42

We’re coming closer to the Debian/stretch stable release and similar to what we had with #newinwheezy and #newinjessie it’s time for #newinstretch!

Hideki Yamane already started the game by blogging about GitHub’s Icon font, fonts-octicons and Arturo Borrero Gonzalez wrote a nice article about nftables in Debian/stretch.

One package that isn't new but whose tools are used by many of us is util-linux, which provides many essential system utilities. We have util-linux v2.25.2 in Debian/jessie and in Debian/stretch there will be util-linux >=v2.29.2. There are many new options available and we also have a few new tools available.

Tools that have been taken over from other packages
  • last: used to be shipped via sysvinit-utils in Debian/jessie
  • lastb: used to be shipped via sysvinit-utils in Debian/jessie
  • mountpoint: used to be shipped via initscripts in Debian/jessie
  • sulogin: used to be shipped via sysvinit-utils in Debian/jessie
New tools
  • lsipc: show information on IPC facilities, e.g.:
  • root@ff2713f55b36:/# lsipc
    RESOURCE DESCRIPTION                                              LIMIT USED  USE%
    MSGMNI   Number of message queues                                 32000    0 0.00%
    MSGMAX   Max size of message (bytes)                               8192    -     -
    MSGMNB   Default max size of queue (bytes)                        16384    -     -
    SHMMNI   Shared memory segments                                    4096    0 0.00%
    SHMALL   Shared memory pages                       18446744073692774399    0 0.00%
    SHMMAX   Max size of shared memory segment (bytes) 18446744073692774399    -     -
    SHMMIN   Min size of shared memory segment (bytes)                    1    -     -
    SEMMNI   Number of semaphore identifiers                          32000    0 0.00%
    SEMMNS   Total number of semaphores                          1024000000    0 0.00%
    SEMMSL   Max semaphores per semaphore set.                        32000    -     -
    SEMOPM   Max number of operations per semop(2)                      500    -     -
    SEMVMX   Semaphore max value                                      32767    -     -
    
  • lslogins: display information about known users in the system, e.g.:
  • root@ff2713f55b36:/# lslogins 
      UID USER     PROC PWD-LOCK PWD-DENY LAST-LOGIN GECOS
        0 root        2        0        1            root
        1 daemon      0        0        1            daemon
        2 bin         0        0        1            bin
        3 sys         0        0        1            sys
        4 sync        0        0        1            sync
        5 games       0        0        1            games
        6 man         0        0        1            man
        7 lp          0        0        1            lp
        8 mail        0        0        1            mail
        9 news        0        0        1            news
       10 uucp        0        0        1            uucp
       13 proxy       0        0        1            proxy
       33 www-data    0        0        1            www-data
       34 backup      0        0        1            backup
       38 list        0        0        1            Mailing List Manager
       39 irc         0        0        1            ircd
       41 gnats       0        0        1            Gnats Bug-Reporting System (admin)
      100 _apt        0        0        1            
    65534 nobody      0        0        1            nobody
    
  • lsns: list system namespaces, e.g.:
  • root@ff2713f55b36:/# lsns
            NS TYPE   NPROCS PID USER COMMAND
    4026531835 cgroup      2   1 root bash
    4026531837 user        2   1 root bash
    4026532473 mnt         2   1 root bash
    4026532474 uts         2   1 root bash
    4026532475 ipc         2   1 root bash
    4026532476 pid         2   1 root bash
    4026532478 net         2   1 root bash
    
  • mesg: control write access of other users to your terminal
  • setpriv: run a program with different privilege settings
  • zramctl: tool to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices (see the example below)
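
As a quick example of the last one, setting up a small zram-backed swap device could look like this (assuming the zram kernel module is available):

sudo modprobe zram
dev="$(sudo zramctl --find --size 512M)"   # allocate a free /dev/zramN of 512 MiB and print its name
sudo mkswap "$dev"
sudo swapon "$dev"
sudo zramctl                               # show the status of the configured zram devices
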
New features/options

blkdiscard (discard the content of sectors on a device):

-p, --step <num>    size of the discard iterations within the offset
-z, --zeroout       zero-fill rather than discard

chrt (show or change the real-time scheduling attributes of a process):

-d, --deadline            set policy to SCHED_DEADLINE
-T, --sched-runtime <ns>  runtime parameter for DEADLINE
-P, --sched-period <ns>   period parameter for DEADLINE
-D, --sched-deadline <ns> deadline parameter for DEADLINE

fdformat (do a low-level formatting of a floppy disk):

-f, --from <N>    start at the track N (default 0)
-t, --to <N>      stop at the track N
-r, --repair <N>  try to repair tracks failed during the verification (max N retries)

fdisk (display or manipulate a disk partition table):

-B, --protect-boot            don't erase bootbits when creating a new label
-o, --output <list>           output columns
    --bytes                   print SIZE in bytes rather than in human readable format
-w, --wipe <mode>             wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)

New available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

findmnt (find a (mounted) filesystem):

-J, --json             use JSON output format
-M, --mountpoint <dir> the mountpoint directory
-x, --verify           verify mount table content (default is fstab)
    --verbose          print more details

flock (manage file locks from shell scripts):

-F, --no-fork            execute command without forking
    --verbose            increase verbosity

getty (open a terminal and set its mode):

--reload               reload prompts on running agetty instances

hwclock (query or set the hardware clock):

--get            read hardware clock and print drift corrected result
--update-drift   update drift factor in /etc/adjtime (requires --set or --systohc)

ldattach (attach a line discipline to a serial line):

-c, --intro-command <string>  intro sent before ldattach
-p, --pause <seconds>         pause between intro and ldattach

logger (enter messages into the system log):

-e, --skip-empty         do not log empty lines when processing files
    --no-act             do everything except write the log
    --octet-count        use rfc6587 octet counting
-S, --size <size>        maximum size for a single message
    --rfc3164            use the obsolete BSD syslog protocol
    --rfc5424[=<snip>]   use the syslog protocol (the default for remote);
                           <snip> can be notime, or notq, and/or nohost
    --sd-id <id>         rfc5424 structured data ID
    --sd-param <data>    rfc5424 structured data name=value
    --msgid <msgid>      set rfc5424 message id field
    --socket-errors[=<on|off|auto>] print connection errors when using Unix sockets

losetup (set up and control loop devices):

-L, --nooverlap               avoid possible conflict between devices
    --direct-io[=<on|off>]    open backing file with O_DIRECT 
-J, --json                    use JSON --list output format

New available --list column:

DIO  access backing file with direct-io

lsblk (list information about block devices):

-J, --json           use JSON output format

New available columns (for --output):

HOTPLUG  removable or hotplug device (usb, pcmcia, ...)
SUBSYSTEMS  de-duplicated chain of subsystems

lscpu (display information about the CPU architecture):

-y, --physical          print physical instead of logical IDs

New available column:

DRAWER  logical drawer number

lslocks (list local system locks):

-J, --json             use JSON output format
-i, --noinaccessible   ignore locks without read permissions

nsenter (run a program with namespaces of other processes):

-C, --cgroup[=<file>]      enter cgroup namespace
    --preserve-credentials do not touch uids or gids
-Z, --follow-context       set SELinux context according to --target PID

rtcwake (enter a system sleep state until a specified wakeup time):

--date <timestamp>   date time of timestamp to wake
--list-modes         list available modes

sfdisk (display or manipulate a disk partition table):

New Commands:

-J, --json <dev>                  dump partition table in JSON format
-F, --list-free [<dev> ...]       list unpartitioned free areas of each device
-r, --reorder <dev>               fix partitions order (by start offset)
    --delete <dev> [<part> ...]   delete all or specified partitions
--part-label <dev> <part> [<str>] print or change partition label
--part-type <dev> <part> [<type>] print or change partition type
--part-uuid <dev> <part> [<uuid>] print or change partition uuid
--part-attrs <dev> <part> [<str>] print or change partition attributes

New Options:

-a, --append                   append partitions to existing partition table
-b, --backup                   backup partition table sectors (see -O)
    --bytes                    print SIZE in bytes rather than in human readable format
    --move-data[=<typescript>] move partition data after relocation (requires -N)
    --color[=<when>]           colorize output (auto, always or never)
                               colors are enabled by default
-N, --partno <num>             specify partition number
-n, --no-act                   do everything except write to device
    --no-tell-kernel           do not tell kernel about changes
-O, --backup-file <path>       override default backup file name
-o, --output <list>            output columns
-w, --wipe <mode>              wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>   wipe signatures from new partitions (auto, always or never)
-X, --label <name>             specify label type (dos, gpt, ...)
-Y, --label-nested <name>      specify nested label type (dos, bsd)

Available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start  End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

swapon (enable devices and files for paging and swapping):

-o, --options <list>     comma-separated list of swap options

New available columns (for --show):

UUID   swap uuid
LABEL  swap label

unshare (run a program with some namespaces unshared from the parent):

-C, --cgroup[=<file>]                              unshare cgroup namespace
    --propagation slave|shared|private|unchanged   modify mount propagation in mount namespace
-s, --setgroups allow|deny                         control the setgroups syscall in user namespaces
Deprecated / removed options

sfdisk (display or manipulate a disk partition table):

-c, --id                  change or print partition Id
    --change-id           change Id
    --print-id            print Id
-C, --cylinders <number>  set the number of cylinders to use
-H, --heads <number>      set the number of heads to use
-S, --sectors <number>    set the number of sectors to use
-G, --show-pt-geometry    deprecated, alias to --show-geometry
-L, --Linux               deprecated, only for backward compatibility
-u, --unit S              deprecated, only sector unit is supported

Benjamin Mako Hill: Children’s Perspectives on Critical Data Literacies

19 May, 2017 - 07:51

Last week, we presented a new paper that describes how children are thinking through some of the implications of new forms of data collection and analysis. The presentation was given at the ACM CHI conference in Denver last week and the paper is open access and online.

Over the last couple years, we’ve worked on a large project to support children in doing — and not just learning about — data science. We built a system, Scratch Community Blocks, that allows the 18 million users of the Scratch online community to write their own computer programs — in Scratch of course — to analyze data about their own learning and social interactions. An example of one of those programs to find how many of one’s followers in Scratch are not from the United States is shown below.

Last year, we deployed Scratch Community Blocks to 2,500 active Scratch users who, over a period of several months, used the system to create more than 1,600 projects.

As children used the system, Samantha Hautea, a student in UW’s Communication Leadership program, led a group of us in an online ethnography. We visited the projects children were creating and sharing. We followed the forums where users discussed the blocks. We read comment threads left on projects. We combined Samantha’s detailed field notes with the text of comments and forum posts, with ethnographic interviews of several users, and with notes from two in-person workshops. We used a technique called grounded theory to analyze these data.

What we found surprised us. We expected children to reflect on being challenged by — and hopefully overcoming — the technical parts of doing data science. Although we certainly saw this happen, what emerged much more strongly from our analysis was detailed discussion among children about the social implications of data collection and analysis.

In our analysis, we grouped children’s comments into five major themes that represented what we called “critical data literacies.” These literacies reflect things that children felt were important implications of social media data collection and analysis.

First, children reflected on the way that programmatic access to data — even data that was technically public — introduced privacy concerns. One user described the ability to analyze data as "creepy", but at the same time, "very cool." Children expressed concern that programmatic access to data could lead to "stalking" and suggested that the system should ask for permission.

Second, children recognized that data analysis requires skepticism and interpretation. For example, Scratch Community Blocks introduced a bug where the block that returned data about followers included users with disabled accounts. One user, in an interview described to us how he managed to figure out the inconsistency:

At one point the follower blocks, it said I have slightly more followers than I do. And, that was kind of confusing when I was trying to make the project. […] I pulled up a second [browser] tab and compared the [data from Scratch Community Blocks and the data in my profile].

Third, children discussed the hidden assumptions and decisions that drive the construction of metrics. For example, the number of views received for each project in Scratch is counted using an algorithm that tries to minimize the impact of gaming the system (similar to, for example, YouTube). As children started to build programs with data, they started to uncover and speculate about the decisions behind metrics. For example, they guessed that the view count might only include "unique" views and that view counts may include users who do not have accounts on the website.

Fourth, children building projects with Scratch Community Blocks realized that an algorithm driven by social data may cause certain users to be excluded. For example, a 13-year-old expressed concern that the system could be used to exclude users with few social connections saying:

I love these new Scratch Blocks! However I did notice that they could be used to exclude new Scratchers or Scratchers with not a lot of followers by using a code: like this: when flag clicked if then user’s followers < 300 stop all. I do not think this a big problem as it would be easy to remove this code but I did just want to bring this to your attention in case this not what you would want the blocks to be used for.

Fifth, children were concerned about the possibility that measurement might distort the Scratch community’s values. While giving feedback on the new system, a user expressed concern that by making it easier to measure and compare followers, the system could elevate popularity over creativity, collaboration, and respect as a marker of success in Scratch.

I think this was a great idea! I am just a bit worried that people will make these projects and take it the wrong way, saying that followers are the most important thing in on Scratch.

Kids’ conversations around Scratch Community Blocks are good news for educators who are starting to think about how to engage young learners in thinking critically about the implications of data. Although no kid using Scratch Community Blocks discussed each of the five literacies described above, the themes reflect starting points for educators designing ways to engage kids in thinking critically about data.

Our work shows that if children are given opportunities to actively engage and build with social and behavioral data, they might not only learn how to do data analysis, but also reflect on its implications.

This blog-post and the work that it describes is a collaborative project by Samantha Hautea, Sayamindu Dasgupta, and Benjamin Mako Hill. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson from MIT CSAIL. Financial support came from the US National Science Foundation.

Alessio Treglia: Digital Ipseity: Which Identity?

18 May, 2017 - 15:50

Within the next three years, more than seven billion people and businesses will be connected to the Internet. During this time of dramatic increases in access to the Internet, networks have seen an interesting proliferation of systems for digital identity management (i.e. our SPID in Italy). But what is really meant by “digital identity“? All these systems are implemented in order to have the utmost certainty that the data entered by the subscriber (address, name, birth, telephone, email, etc.) is directly coincident with that of the physical person. In other words, data are certified to be “identical” to those of the user; there is a perfect overlap between the digital page and the authentic user certificate: an “idem“, that is, an identity.

This identity is our personal record reflected on the net, nothing more than that. Obviously, this data needs to be appropriately protected from malicious attacks by means of strict privacy rules, as it contains so-called "sensitive" information, but this data itself is not sufficiently interesting for the commercial market, except for statistical purposes on homogeneous population groups. What may be a real goldmine for the "web company" is another type of information: the user's ipseity. It is important to immediately remove the strong semantic ambiguity that weighs on the notion of identity. There are two distinct meanings…

<Read More…[by Fabio Marzocca]>

Michael Prokop: Debugging a mystery: ssh causing strange exit codes?

18 May, 2017 - 14:29

Recently we had a WTF moment at a customer of mine which is worth sharing.

In an automated deployment procedure we’re installing Debian systems and setting up MySQL HA/Scalability. Installation of the first node works fine, but during installation of the second node something weird was going on. Even though the deployment procedure reported that everything went fine, it wasn’t fine at all. After bisecting to the relevant command lines where it was going wrong, we identified that the failure was happening between two ssh/scp commands, which are invoked inside a chroot through a shell wrapper. The ssh command caused a wrong exit code to show up: instead of bailing out with an error (we’re running under ‘set -e‘) it returned with exit code 0 and the deployment procedure continued, even though there was a fatal error. Initially we triggered the bug when two ssh/scp command lines close to each other were executed, but I managed to find a minimal example for demonstration purposes:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

What we’d expect is the following behavior, receive exit code 1 from the last command line in the chroot wrapper:

# ./ssh_wrapper 
return code = 1

But what we actually get is exit code 0:

# ./ssh_wrapper 
return code = 0

Uhm?! So what’s going wrong and what’s the fix? Let’s find out what’s causing the problem:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost command_does_not_exist >/dev/null 2>&1
exit "$?"
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 127

Ok, so if we invoke it with a binary that does not exist we properly get exit code 127, as expected.
What about switching /bin/bash to /bin/sh (which corresponds to dash here) to make sure it’s not a bash bug:

# cat ssh_wrapper 
chroot << "EOF" / /bin/sh
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Oh, but that works as expected!?

When looking at this behavior I had the feeling that something is going wrong with file descriptors. So what about wrapping the ssh command line within different tools? No luck with `stdbuf -i0 -o0 -e0 ssh root@localhost hostname`, nor with `script -c “ssh root@localhost hostname” /dev/null` and also not with `socat EXEC:”ssh root@localhost hostname” STDIO`. But it works under unbuffer(1) from the expect package:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
unbuffer ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

So my bet on something with the file descriptor handling was right. Going through the ssh manpage, what about using ssh’s `-n` option to prevent reading from standard input (stdin)?

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh -n root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Bingo! Quoting ssh(1):

     -n      Redirects stdin from /dev/null (actually, prevents reading from stdin).
             This must be used when ssh is run in the background.  A common trick is
             to use this to run X11 programs on a remote machine.  For example,
             ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi,
             and the X11 connection will be automatically forwarded over an encrypted
             channel.  The ssh program will be put in the background.  (This does not work
             if ssh needs to ask for a password or passphrase; see also the -f option.)

Let’s execute the scripts through `strace -ff -s500 ./ssh_wrapper` to see what’s going on in more detail.
In the strace run without ssh’s `-n` option we see that it’s cloning stdin (file descriptor 0), getting assigned to file descriptor 4:

dup(0)            = 4
[...]
read(4, "exit 1\n", 16384) = 7

while in the strace run with ssh’s `-n` option being present there’s no file descriptor duplication but only:

open("/dev/null", O_RDONLY) = 4

This matches ssh.c’s ssh_session2_open function (where stdin_null_flag corresponds to ssh’s `-n` option):

        if (stdin_null_flag) {                                            
                in = open(_PATH_DEVNULL, O_RDONLY);
        } else {
                in = dup(STDIN_FILENO);
        }

This behavior can also be simulated if we explicitly read from /dev/null, and this indeed works as well:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null </dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

The underlying problem is that both bash and ssh are consuming from stdin. This can be verified via:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
echo "Inner: pre"
while read line; do echo "Eat: $line" ; done
echo "Inner: post"
exit 3
EOF
echo "Outer: exit code = $?"

# ./ssh_wrapper
Inner: pre
Eat: echo "Inner: post"
Eat: exit 3
Outer: exit code = 0

This behavior applies to bash, ksh, mksh, posh and zsh. Only dash doesn’t show this behavior.
To understand the difference between bash and dash executions we can use the following test scripts:

# cat stdin-test-cmp
#!/bin/sh

TEST_SH=bash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-bash.out
TEST_SH=dash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-dash.out

# cat stdin-test
#!/bin/sh

: ${TEST_SH:=dash}

$TEST_SH <<"EOF"
echo "Inner: pre"
while read line; do echo "Eat: $line"; done
echo "Inner: post"
exit 3
EOF

echo "Outer: exit code = $?"

When executing `./stdin-test-cmp` and comparing the generated files stdin-test-bash.out and stdin-test-dash.out you’ll notice that dash consumes all stdin in one single go (a single `read(0, …)`), instead of character-by-character as specified by POSIX and implemented by bash, ksh, mksh, posh and zsh. See stdin-test-bash.out on the left side and stdin-test-dash.out on the right side in this screenshot:

So when ssh tries to read from stdin there’s nothing there anymore.

Quoting POSIX’s sh section:

When the shell is using standard input and it invokes a command that also uses standard input, the shell shall ensure that the standard input file pointer points directly after the command it has read when the command begins execution. It shall not read ahead in such a manner that any characters intended to be read by the invoked command are consumed by the shell (whether interpreted by the shell or not) or that characters that are not read by the invoked command are not seen by the shell. When the command expecting to read standard input is started asynchronously by an interactive shell, it is unspecified whether characters are read by the command or interpreted by the shell.

If the standard input to sh is a FIFO or terminal device and is set to non-blocking reads, then sh shall enable blocking reads on standard input. This shall remain in effect when the command completes.

So while we learned that both bash and ssh are consuming from stdin and this needs to be prevented by either using ssh’s `-n` or explicitly specifying stdin, we also noticed that dash’s behavior is different from all the other main shells and could be considered a bug.

Lessons learned:

  • Be aware of ssh’s `-n` option when using ssh/scp inside scripts.
  • Feeding shell scripts via stdin is not only error-prone but also very inefficient, as a standards-compliant implementation requires a read(2) system call per byte of input. Instead, write the commands to a temporary script and execute that (see the sketch after this list).
  • When debugging problems make sure to explore different approaches and tools to ensure you’re not relying on a buggy behavior in any involved tool.
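
To illustrate the second point, here is one way to rework the ssh_wrapper example so that the script body reaches bash as a file instead of via stdin (a sketch, keeping the chroot to / from the examples above):

tmp="$(mktemp)"
cat > "$tmp" << "EOF"
ssh root@localhost hostname >/dev/null
exit 1
EOF
chroot / /bin/bash "$tmp"
echo "return code = $?"
rm -f "$tmp"

Since bash now reads the script from a file rather than from stdin, ssh cannot consume the remaining script lines and the exit code comes back as 1, as expected.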

Thanks to Guillem Jover for review and feedback regarding this blog post.

Tianon Gravi: My Docker Install Process (redux)

18 May, 2017 - 13:00

Since I wrote my first post on this topic, Docker has switched from https://apt.dockerproject.org/repo to https://download.docker.com/linux/debian, so this post revisits my original steps, but tailored for the new repo.

There will be less commentary this time (straight to the beef). For further commentary on “why” for any step, see my previous post.

These steps should be fairly similar to what’s found in upstream’s “Install Docker on Debian” document, but do differ slightly in a few minor ways.

grab Docker’s APT repo GPG key
# "Docker Release (CE deb)"

export GNUPGHOME="$(mktemp -d)"
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 9DC858229FC7DD38854AE2D88D81803C0EBFCD88

# stretch+
gpg --export --armor 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 | sudo tee /etc/apt/trusted.gpg.d/docker.gpg.asc

# jessie
# gpg --export 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 | sudo tee /etc/apt/trusted.gpg.d/docker.gpg > /dev/null

rm -rf "$GNUPGHOME"

Verify:

$ apt-key list
...

/etc/apt/trusted.gpg.d/docker.gpg.asc
-------------------------------------
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]

...
add Docker’s APT source

With the switch to download.docker.com, HTTPS is now mandated:

$ apt-get update && apt-get install apt-transport-https

Setup sources.list:

echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable" | sudo tee /etc/apt/sources.list.d/docker.list

Add the edge component for monthly releases and test for release candidates (i.e., ... stretch stable edge). Replace stretch with jessie for Jessie installs.

At this point, you should be safe to run apt-get update to verify the changes:

$ sudo apt-get update
...
Get:5 https://download.docker.com/linux/debian stretch/stable amd64 Packages [1227 B]
...
Reading package lists... Done

(There shouldn’t be any warnings or errors about missing keys, etc.)

configure Docker

This step could be done after Docker’s installed (and indeed, that’s usually when I do it because I forget that I should until I’ve got Docker installed and realize that my configuration is suboptimal), but doing it before ensures that Docker doesn’t have to be restarted later.

sudo mkdir -p /etc/docker
sudo sensible-editor /etc/docker/daemon.json

(sensible-editor can be replaced by whatever editor you prefer, but that command should choose or prompt for a reasonable default)

I then fill daemon.json with at least a default storage-driver. Whether I use aufs or overlay2 depends on my kernel version and available modules – if I’m on Ubuntu, AUFS is still a no-brainer (since it’s included in the default kernel if the linux-image-extra-XXX/linux-image-extra-virtual package is installed), but on Debian AUFS is only available in either 3.x kernels (jessie’s default non-backports kernel) or recently in the aufs-dkms package (as of this writing, still only available on stretch and sid – no jessie-backports option).

If my kernel is 4.x+, I’m likely going to choose overlay2 (or if that errors out, the older overlay driver).

Choosing an appropriate storage driver is a fairly complex topic, and I’d recommend that for serious production deployments, more research on pros and cons is performed than I’m including here (especially since AUFS and OverlayFS are not the only options – they’re just the two I personally use most often).

{
	"storage-driver": "overlay2"
}
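
If in doubt about overlay2 support, a quick and non-authoritative check of the running kernel might look like this:

uname -r                          # 4.x or newer is a good sign for overlay2
sudo modprobe overlay             # load the module if it is available
grep -w overlay /proc/filesystems # should list overlay once the module is loaded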
configure boot parameters

I usually set a few boot parameters as well (in /etc/default/grub’s GRUB_CMDLINE_LINUX_DEFAULT option – run sudo update-grub after adding these, space-separated).

  • cgroup_enable=memory – enable “memory accounting” for containers (allows docker run --memory for setting hard memory limits on containers)
  • swapaccount=1 – enable “swap accounting” for containers (allows docker run --memory-swap for setting hard swap memory limits on containers)
  • systemd.legacy_systemd_cgroup_controller=yes – newer versions of systemd may disable the legacy cgroup interfaces Docker currently uses; this instructs systemd to keep those enabled (for more details, see systemd/systemd#4628, opencontainers/runc#1175, docker/docker#28109)
  • vsyscall=emulate – allow older binaries to run (debian:wheezy, etc.; see docker/docker#28705)

All together:

...
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1 systemd.legacy_systemd_cgroup_controller=yes vsyscall=emulate"
...
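
After saving the file, the new parameters only apply once the GRUB configuration is regenerated and the machine is rebooted:

sudo update-grub
sudo reboot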
install Docker!

Finally, the time has come.

$ sudo apt-get install -V docker-ce
...
   docker-ce (17.03.1~ce-0~debian-stretch)
...

$ sudo docker version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:07:28 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:07:28 2017
 OS/Arch:      linux/amd64
 Experimental: false

$ sudo usermod -aG docker "$(id -un)"
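
Note that the group change only takes effect in new login sessions; something like the following (run interactively) avoids a full logout:

newgrp docker    # start a shell with the docker group active
docker version   # should now work without sudo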

Daniel Pocock: Hacking the food chain in Switzerland

18 May, 2017 - 01:41

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH, (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.

Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick; can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis, an interesting starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is?

Reproducible builds folks: Reproducible Builds: week 107 in Stretch cycle

17 May, 2017 - 23:08

Here's what happened in the Reproducible Builds effort between Sunday May 7 and Saturday May 13 2017:

Report from Reproducible Builds Hamburg Hackathon

We were 16 participants from 12 projects: 7 Debian, 2 repeatr.io, 1 ArchLinux, 1 coreboot + LEDE, 1 F-Droid, 1 ElectroBSD + privoxy, 1 GNU R, 1 in-toto.io, 1 Meson and 1 openSUSE. Three people came from the USA, 3 from the UK, 2 from Finland, 1 from Austria, 1 from Denmark and 6 from Germany, plus we had several guests from our gracious hosts at the CCCHH hackerspace as well as a guest from Australia…

We had four presentations:

Some of the things we worked on:

  • h01ger did orga stuff for this very hackathon, discussed tests.r-b.o with various non-Debian contributors, filed some bugs and restarted the policy discussion in #844431. He also did some polishing work on tests.r-b.o which shall be covered in the next issue of our weekly blog.
  • Justin Cappos involved many of us in interesting discussions and started to write an academic paper about Reproducible Builds of which he shared an early beta on our mailinglist.
  • Chris Lamb (lamby) filed a number of patches for individual packages, worked on diffoscope, merged many changes to strip-nondeterminism and also filed #862073 against dak to upload buildinfo files to external services.
  • Maria Glukhova (siamezzze) fixed a bug with plots on tests.reproducible-builds.org and worked on diffoscope test coverage.
  • Lynxis worked on a new squashfs upstream release improving support for reproducible squashfs filesystems and also had some time to hack on coreboot and show others how to install coreboot on real hardware.
  • Michael Poehn worked on integrating F-Droid builds into tests.reproducible-builds.org, on the F-Droid verification utility and also ran some app reproducibility tests.
  • Bernhard worked on various unreproducible issues upstream and submitted fixes for curl, bzr and ant (https://bz.apache.org/bugzilla/show_bug.cgi?id=61079).
  • Erin Myhre worked on bootstrapping cleanroom builds of compiler components in Repeatr sandboxes.
  • Calvin Behling merged improvements to reppl for a cleaner storage format and better error handling and did design work for next version of repeatr pipeline execution. Calvin also lead the reproducibility testing of restaurant mood lighting.
  • Eric and Calvin also claim to have had all sorts of useful exchanges about the state of other projects, and learned a lot about where to look for more info about Debian bootstrap and archive mirroring from Steven and lamby.
  • Phil Hands came by to say hi and worked on testing d-i on jenkins.debian.net.
  • Chris West (Faux) worked on extending misc.git:has-only.py, and started looking at Britney.

We had a Debian focussed meeting where we discussed a number of topics:

  • IRC meetings: yes, we want to try again to have them, monthly, a poll for a good date is being held.
  • Debian tests post Stretch: we'll add tests for stable/Stretch.
  • .buildinfo files, how to move forward: we need sourceful uploads for any arch:all packages. dak should send .buildinfo files to buildinfo.debian.net.
  • (pre?) Stretch release press release: we should do that, esp. as our achievements are largely unrelated to Stretch.
  • Reproducible Builds Summit 3: yes, we want that.
  • what to do (in notes.git) with resolved issues: keep the issues.
  • strip-nondeterminism quo vadis: Justin reminded us that strip-nondeterminism is a workaround we want to get rid of.

And then we also had a lot of fun in the hackerspace, enjoying some of their gimmicks, such as being able to open physical doors with ssh or controlling light and music with a web browser without authentication (besides being in the right network).

(This wasn't the hackathon per-se, but some of us appreciated these sights and so we thought you would too.)

Many thanks to:

  • Debian for sponsoring food and accommodation!
  • Dock Europe for providing us with really nice accommodation in the house!
  • CCC Hamburg for letting us use their hackerspace for >3 days non-stop!
News and media coverage

openSUSE has had a security breach in their infrastructure, including their build services. As of this writing, the scope and impact are still unclear, however the incident illustrates that no one should rely on being able to secure their infrastructure at all times. Reproducible Builds help mitigate this by allowing independent verification of build results, by parties that are unaffected by the compromise.

(Whilst this can happen to anyone. Kudos to openSUSE for being open about it. Now let's continue working on Reproducible Builds everywhere!)

On May 13th Chris Lamb gave a talk on Reproducible Builds at OSCAL 2017 in Tirana, Albania.

Toolchain bug reports and fixes
Packages' bug reports
Reviews of unreproducible packages

11 package reviews have been added, 2562 have been updated and 278 have been removed in this week, adding to our knowledge about identified issues. Most of the updates were to move ~1800 packages affected by the generic catch-all captures_build_path (out of ~2600 total) to the more specific gcc_captures_build_path, fixed by our proposed patches to GCC.

5 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (1)
  • Chris Lamb (2)
  • Chris West (1)
diffoscope development

diffoscope development continued on the experimental branch:

  • Maria Glukhova:
    • Code refactoring and more tests.
  • Chris Lamb:
    • Add safeguards against unpacking recursive or deeply-nested archives. (Closes: #780761)
strip-nondeterminism development
  • strip-nondeterminism 0.033-1 and -2 were uploaded to unstable by Chris Lamb. It included contributions from:

  • Bernhard M. Wiedemann:

    • Add cpio handler.
    • Code quality improvements.
  • Chris Lamb:
    • Add documentation and increase verbosity, in support of the long-term aim of removing the need for this tool.
reprotest development
  • reprotest 0.6.1 and 0.6.2 were uploaded to unstable by Ximin Luo. It included contributions from:

  • Ximin Luo:

    • Add a documentation section on "Known bugs".
    • Move developer documentation away from the man page.
    • Mention release instructions in the previous changelog.
    • Preserve directory structure when copying artifacts. Otherwise hash output on a successful reproduction sometimes fails, because find(1) can't find the artifacts using the original artifact_pattern.
  • Chris Lamb
    • Add proper release instructions and a keyring.
trydiffoscope development
  • Chris Lamb:
    • Uses the diffoscope from Debian experimental if possible.
Misc.

This week's edition was written by Ximin Luo, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Jamie McClelland: Late to the Raspberry Pi party

17 May, 2017 - 21:46

I finally bought my first raspberry pi to setup as a router and wifi access point.

It wasn't easy.

I first had to figure out what to buy. I think that was the hardest part.

I ended up with:

  • Raspberry PI 3 Model B A1.2GHz 64-bit quad-core ARMv8 CPU, 1GB RAM (Model number: RASPBERRYPI3-MODB-1GB)
  • Transcend USB 3.0 SDHC / SDXC / microSDHC / SDXC Card Reader, TS-RDF5K (Black). I only needed this because I don't have one already and I will need a way to copy a raspbian image from my laptop to a micro SD card.
  • Centon Electronics Micro SD Card 16 GB (S1-MSDHC4-16G). This is the micro sd card.
  • Smraza Clear case for Raspberry Pi 3 2 Model B with Power Supply,2pcs Heatsinks and Micro USB with On/Off Switch. And this is the box to put it all in.

I already have a cable matters USB to ethernet device, which will provide the second ethernet connection so this device can actually work as a router.

I studiously followed the directions to download the raspbian image and copy it to my micro sd card. I also touched a file on the boot partition called ssh so ssh would start automatically. Note: I first touched the ssh file on the root partition (sdb2) before realizing it belonged on the boot partition (sdb1). And, despite ambiguous directions found on the Internet, lowercase 'ssh' for the filename seems to do the trick.
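
As a sketch, assuming the micro SD card shows up as /dev/sdb with the boot partition as /dev/sdb1:

sudo mount /dev/sdb1 /mnt
sudo touch /mnt/ssh
sudo umount /mnt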

Then, I found the IP address with the help of NMAP (sudo nmap -sn 192.168.69.*) and tried to ssh in but alas...

Connection reset by 192.168.69.116 port 22

No dice.

So, I re-mounted the sdb2 partition of the micro sd card and looked in var/log/auth.log and found:

May  5 19:23:00 raspberrypi sshd[760]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
May  5 19:23:00 raspberrypi sshd[760]: fatal: No supported key exchange algorithms [preauth]
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May  5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_rsa_key
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May  5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_dsa_key
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May  5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format

How did that happen? And wait a minute...

0 jamie@turkey:~$ ls -l /mnt/etc/ssh/ssh_host_ecdsa_key
-rw------- 1 root root 0 Apr 10 05:58 /mnt/etc/ssh/ssh_host_ecdsa_key
0 jamie@turkey:~$ date
Fri May  5 15:44:15 EDT 2017
0 jamie@turkey:~$

Are the keys embedded in the image? Isn't that wrong?

I fixed with:

0 jamie@turkey:mnt$ sudo rm /mnt/etc/ssh/ssh_host_*
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_rsa_key -N '' -t rsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_dsa_key -N '' -t dsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_ed25519_key -N '' -t ed25519
0 jamie@turkey:mnt$

Then I could ssh in. I removed the pi user account and added my ssh key to /root/.ssh/authorized_keys and put a new name "mondragon" in the /etc/hostname file.

And... I upgraded to Debian stretch and rebooted.

I plugged my cable matters USB/Ethernet adapter into the device and connected it to my cable modem.

Next I started to configure the device to be a wifi access point using this excellent tutorial, but decided I wanted to setup my networks using systemd-networkd instead.

Since /etc/network/interfaces already had eth0 set to manual (because apparently it is controlled by dhcpcd instead), I didn't need any modifications there.

However, I wanted to use the dhcp client built-in to systemd-networkd, so to prevent dhcpcd from obtaining an IP address, I purged dhcpcd:

apt-get purge dhcpcd

I was planning to also use systemd-networkd to name the devices (using *.link files) but nothing I could do could convince systemd to rename them, so I gave up and added /etc/udev/rules.d/70-persistent-net.rules:

    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="b8:27:eb:ce:b5:c3", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME:="wan"
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="a0:ce:c8:01:20:7d", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME:="lan"
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="b8:27:eb:9b:e0:96", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME:="wlan"

(If you are copying and pasting, the MAC addresses will have to change.)

Then I added the following files:

root@mondragon:~# head /etc/systemd/network/*
==> /etc/systemd/network/50-lan.network <==
[Match]
Name=lan

[Network]
Address=192.168.69.1/24

==> /etc/systemd/network/55-wlan.network <==
[Match]
Name=wlan

[Network]
Address=10.0.69.1/24

==> /etc/systemd/network/60-wan.network <==
[Match]
Name=wan

[Network]
DHCP=v4
IPForward=yes
IPMasquerade=yes
root@mondragon:~#
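
systemd-networkd is not necessarily enabled out of the box on Raspbian; assuming it isn't, something along these lines is needed before the .network files above take effect:

sudo systemctl enable --now systemd-networkd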

Sadly, IPMasquerade doesn't seem to work either for some reason, so...

root@mondragon:~# cat /etc/systemd/system/masquerade.service 
[Unit]
Description=Start masquerading because Masquerade=yes not working in wan.network.

[Service]
Type=oneshot
ExecStart=/sbin/iptables -t nat -A POSTROUTING -o wan -j MASQUERADE

[Install]
WantedBy=network.target
root@mondragon:~#
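
Assuming the unit file is saved as /etc/systemd/system/masquerade.service as shown, it still needs to be enabled by hand:

sudo systemctl daemon-reload
sudo systemctl enable --now masquerade.service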

And, systemd DHCPServer worked, but then it didn't and I couldn't figure out how to debug, so...

apt-get install dnsmasq

Followed by:

root@mondragon:~# cat /etc/dnsmasq.d/mondragon.conf 
# Don't provide DNS services (unbound does that).
port=0

interface=lan
interface=wlan

# Only provide dhcp services since systemd-networkd dhcpserver seems
# flakey.
dhcp-range=set:cable,192.168.69.100,192.168.69.150,255.255.255.0,4h
dhcp-option=tag:cable,option:dns-server,192.168.69.1
dhcp-option=tag:cable,option:router,192.168.69.1

dhcp-range=set:wifi,10.0.69.100,10.0.69.150,255.255.255.0,4h
dhcp-option=tag:wifi,option:dns-server,10.0.69.1
dhcp-option=tag:wifi,option:router,10.0.69.1

root@mondragon:~#

It would probably be simpler to have dnsmasq provide DNS service also, but I happen to like unbound:

apt-get install unbound

And...

root@mondragon:~# cat /etc/unbound/unbound.conf.d/server.conf 
server:
    interface: 127.0.0.1
    interface: 192.168.69.1
    interface: 10.0.69.1

    access-control: 192.168.69.0/24 allow
    access-control: 10.0.69.0/24 allow

    # We do query localhost for our stub zone: loc.cx
    do-not-query-localhost: no

    # Up this level when debugging.
    log-queries: no
    logfile: ""
    #verbosity: 1

    # Settings to work better with systemd
    do-daemonize: no
    pidfile: ""
root@mondragon:~# 

Now on to the wifi access point.

apt-get install hostapd

And the configuration file:

root@mondragon:~# cat /etc/hostapd/hostapd.conf
# This is the name of the WiFi interface we configured above
interface=wlan

# Use the nl80211 driver with the brcmfmac driver
driver=nl80211

# This is the name of the network
ssid=peacock

# Use the 2.4GHz band
hw_mode=g

# Use channel 6
channel=6

# Enable 802.11n
ieee80211n=1

# Enable WMM
wmm_enabled=1

# Enable 40MHz channels with 20ns guard interval
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]

# Accept all MAC addresses
macaddr_acl=0

# Use WPA authentication
auth_algs=1

# Require clients to know the network name
ignore_broadcast_ssid=0

# Use WPA2
wpa=2

# Use a pre-shared key
wpa_key_mgmt=WPA-PSK

# The network passphrase
wpa_passphrase=organicinternet

# Use AES, instead of TKIP
rsn_pairwise=CCMP
root@mondragon:~#

The hostapd package doesn't have a systemd start-up file, so I added one:

root@mondragon:~# cat /etc/systemd/system/hostapd.service 
[Unit]
Description=Hostapd IEEE 802.11 AP, IEEE 802.1X/WPA/WPA2/EAP/RADIUS Authenticator
Wants=network.target
Before=network.target
Before=network.service

[Service]
ExecStart=/usr/sbin/hostapd /etc/hostapd/hostapd.conf

[Install]
WantedBy=multi-user.target
root@mondragon:~#
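
As with the masquerade unit above, the new service still needs to be enabled, for example:

sudo systemctl daemon-reload
sudo systemctl enable --now hostapd.service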

My last step was to modify /etc/ssh/sshd_config so it only listens on the lan and wlan interfaces (listening on wlan is a bit of a risk, but also useful when mucking with the lan network settings to ensure I don't get locked out).
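
A minimal sketch of the relevant sshd_config directives, assuming the lan and wlan addresses configured above:

ListenAddress 192.168.69.1
ListenAddress 10.0.69.1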

Michal Čihař: Weblate 2.14

17 May, 2017 - 21:00

Weblate 2.14 has been released today, slightly ahead of schedule. There are quite a lot of security improvements based on reports we got from the HackerOne program, as well as API extensions and other minor improvements.

Full list of changes:

  • Add glossary entries using AJAX.
  • The logout now uses POST to avoid CSRF.
  • The API key token reset now uses POST to avoid CSRF.
  • Weblate sets Content-Security-Policy by default.
  • The local editor URL is validated to avoid self-XSS.
  • The password is now validated against common flaws by default.
  • Notify users about important activity with their account, such as password changes.
  • The CSV exports now escape potential formulas.
  • Various minor improvements in security.
  • The authentication attempts are now rate limited.
  • Suggestion content is stored in the history.
  • Store important account activity in audit log.
  • Ask for password confirmation when removing account or adding new associations.
  • Show time when suggestion has been made.
  • There is new quality check for trailing semicolon.
  • Ensure that search links can be shared.
  • Included source string information and screenshots in the API.
  • Allow overwriting translations through API upload.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translation service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared, and you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Dirk Eddelbuettel: Upcoming Rcpp Talks

17 May, 2017 - 09:37

Very excited about the next few weeks, which will cover a number of R conferences, workshops and classes with talks, mostly around Rcpp, plus one notable exception:

  • May 19: Rcpp: From Simple Examples to Machine learning, pre-conference workshop at our R/Finance 2017 conference here in Chicago

  • May 26: Extending R with C++: Motivation and Examples, invited keynote at R à Québec 2017 at Université Laval in Quebec City, Canada

  • June 28-29: Higher-Performance R Programming with C++ Extensions, two-day course at the Zuerich R Courses @ U Zuerich in Zuerich, Switzerland

  • July 3: Rcpp at 1000+ reverse depends: Some Lessons Learned (working title), at DSC 2017 preceding useR! 2017 in Brussels, Belgium

  • July 4: Extending R with C++: Motivation, Introduction and Examples, tutorial preceding useR! 2017 in Brussels, Belgium

  • July 5, 6, or 7: Hosting Data Packages via drat: A Case Study with Hurricane Exposure Data, accepted presentation, joint with Brooke Anderson

If you are near one of those events, interested and able to register (for the events requiring registration), I would love to chat before or after.

Enrico Zini: Accident on the motorway

17 May, 2017 - 04:12

There was an accident on the motorway. Luckily no one got seriously hurt, but a truckful of sugar and a truckful of cereals spilled completely onto the motorway, and it took some time to clean up.

Daniel Pocock: Building an antenna and receiving ham and shortwave stations with SDR

17 May, 2017 - 01:34

In my previous blog on the topic of software defined radio (SDR), I provided a quickstart guide to using gqrx, GNU Radio and the RTL-SDR dongle to receive FM radio and the amateur 2 meter (VHF) band.

Using the same software configuration and the same RTL-SDR dongle, it is possible to add some extra components and receive ham radio and shortwave transmissions from around the world.

Here is the antenna setup from the successful SDR workshop at OSCAL'17 on 13 May:

After the workshop on Saturday, members of the OSCAL team successfully reconstructed the SDR and antenna at the Debian info booth on Sunday and a wide range of shortwave and ham signals were detected:

Here is a close-up look at the laptop, RTL-SDR dongle (above laptop), Ham-It-Up converter (above water bottle) and MFJ-971 ATU (on right):

Buying the parts

  • RTL-SDR dongle (~ € 25): Converts radio signals (RF) into digital signals for reception through the USB port. It is essential to buy the dongles for SDR with TCXO; the generic RTL dongles for TV reception are not stable enough for anything other than TV.
  • Enamelled copper wire, 25 meters or more (~ € 10): Loop antenna. Thicker wire provides better reception and is more suitable for transmitting (if you have a license) but it is heavier. The antenna I've demonstrated at recent events uses 1mm thick wire.
  • 4 (or more) ceramic egg insulators (~ € 10): Attach the antenna to string or rope. Smaller insulators are better as they are lighter and less expensive.
  • 4:1 balun (from € 20): The actual ratio of the balun depends on the shape of the loop (square, rectangle or triangle) and the point where you attach the balun (middle, corner, etc). You may want to buy more than one balun, for example, a 4:1 balun and also a 1:1 balun to try alternative configurations. Make sure it is waterproof, has hooks for attaching a string or rope and an SO-239 socket.
  • 5 meter RG-58 coaxial cable with male PL-259 plugs on both ends (~ € 10): If using more than 5 meters or if you want to use higher frequencies above 30MHz, use thicker, heavier and more expensive cables like RG-213. The cable must be 50 ohm.
  • Antenna Tuning Unit (ATU) (~ € 20 for receive only, or second hand): I've been using the MFJ-971 for portable use and demos because of the weight. There are even lighter and cheaper alternatives if you only need to receive.
  • PL-259 to SMA male pigtail, up to 50cm, RG58 (~ € 5): Joins the ATU to the upconverter. The cable must be RG58 or another 50 ohm cable.
  • Ham It Up v1.3 up-converter (~ € 40): Mixes the HF signal with a signal from a local oscillator to create a new signal in the spectrum covered by the RTL-SDR dongle.
  • SMA (male) to SMA (male) pigtail (~ € 2): Joins the up-converter to the RTL-SDR dongle.
  • USB charger and USB type B cable (~ € 5): Used for power to the up-converter. A spare USB mobile phone charger plug may be suitable.
  • String or rope (€ 5): For mounting the antenna. A lighter and cheaper string is better for portable use while a stronger and weather-resistant rope is better for a fixed installation.

Building the antenna

There are numerous online calculators for measuring the amount of enamelled copper wire to cut.

For example, for a centre frequency of 14.2 MHz on the 20 meter amateur band, the antenna length is 21.336 meters.
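
As a rough cross-check, a common rule of thumb for a full-wave loop is length ≈ 1005 / f(MHz) feet, or roughly 306 / f(MHz) metres (individual calculators use slightly different constants):

    306 / 14.2 ≈ 21.5 metres

which is in the same range as the figure above.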

Add an extra 24 cm (extra 12 cm on each end) for folding the wire through the hooks on the balun.

After cutting the wire, feed it through the egg insulators before attaching the wire to the balun.

Measure the extra 12 cm at each end of the wire and wrap some tape around there to make it easy to identify in future. Fold it, insert it into the hook on the balun and twist it around itself. Use between four to six twists.

Strip off approximately 0.5cm of the enamel on each end of the wire with a knife, sandpaper or some other tool.

Insert the exposed ends of the wire into the screw terminals and screw it firmly into place. Avoid turning the screw too tightly or it may break or snap the wire.

Insert string through the egg insulators and/or the middle hook on the balun and use the string to attach it to suitable support structures such as a building, posts or trees. Try to keep it at least two meters from any structure. Maximizing the surface area of the loop improves the performance: a circle is an ideal shape, but a square or 4:3 rectangle will work well too.

For optimal performance, if you imagine the loop is on a two-dimensional plane, the first couple of meters of feedline leaving the antenna should be on the plane too and at a right angle to the edge of the antenna.

Join all the other components together using the coaxial cables.

Configuring gqrx for the up-converter and shortwave signals

Inspect the up-converter carefully. Look for the crystal and find the frequency written on the side of it. The frequency written on the specification sheet or web site may be wrong so looking at the crystal itself is the best way to be certain. On my Ham It Up, I found a crystal with 125.000 written on it, this is 125 MHz.

Launch gqrx, go to the File menu and select I/O devices. Change the LNB LO value to match the crystal frequency on the up-converter, with a minus sign. For my Ham It Up, I use the LNB LO value -125.000000 MHz.

Click OK to close the I/O devices window.

On the Input Controls tab, make sure Hardware AGC is enabled.

On the Receiver options tab, change the Mode value. Commercial shortwave broadcasts use AM and amateur transmission use single sideband: by convention, LSB is used for signals below 10MHz and USB is used for signals above 10MHz. To start exploring the 20 meter amateur band around 14.2 MHz, for example, use USB.

In the top of the window, enter the frequency, for example, 14.200 000 MHz.

Now choose the FFT Settings tab and adjust the Freq zoom slider. Zoom until the width of the display is about 100 kHz, for example, from 14.15 on the left to 14.25 on the right.

Click the Play icon at the top left to start receiving. You may hear white noise. If you hear nothing, check the computer's volume controls, move the Gain slider (bottom right) to the maximum position and then lower the Squelch value on the Receiver options tab until you hear the white noise or a transmission.

Adjust the Antenna Tuner knobs

Now that gqrx is running, it is time to adjust the knobs on the antenna tuner (ATU). Reception improves dramatically when it is tuned correctly. Exact instructions depend on the type of ATU you have purchased, here I present instructions for the MFJ-971 that I have been using.

Turn the TRANSMITTER and ANTENNA knobs to the 12 o'clock position and leave them like that. Turn the INDUCTANCE knob while looking at the signals in the gqrx window. When you find the best position, the signal strength displayed on the screen will appear to increase (the animated white line should appear to move upwards and maybe some peaks will appear in the line).

When you feel you have found the best position for the INDUCTANCE knob, leave it in that position and begin turning the ANTENNA knob clockwise looking for any increase in signal strength on the chart. When you feel that is correct, begin turning the TRANSMITTER knob.

Listening to a transmission

At this point, if you are lucky, some transmissions may be visible on the gqrx screen. They will appear as darker colours in the waterfall chart. Try clicking on one of them, the vertical red line will jump to that position. For a USB transmission, try to place the vertical red line at the left hand side of the signal. Try dragging the vertical red line or changing the frequency value at the top of the screen by 100 Hz at a time until the station is tuned as well as possible.

Try and listen to the transmission and identify the station. Commercial shortwave broadcasts will usually identify themselves from time to time. Amateur transmissions will usually include a callsign spoken in the phonetic alphabet. For example, if you hear "CQ, this is Victor Kilo 3 Tango Quebec Romeo" then the station is VK3TQR. You may want to note down the callsign, time, frequency and mode in your log book. You may also find information about the callsign in a search engine.

The video demonstrates reception of a transmission from another country, can you identify the station's callsign and find his location?




If you have questions about this topic, please come and ask on the Debian Hams mailing list. The gqrx package is also available in Fedora and Ubuntu but it is known to crash on startup in Ubuntu 17.04. Users of other distributions may also want to try the Debian Ham Blend bootable ISO live image as a quick and easy way to get started.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.