Planet Debian

Planet Debian - http://planet.debian.org/

David Moreno: Twitter feed for Planet Perl Iron Man

14 August, 2016 - 21:13

I like to read Planet Perl Iron Man, but since I’m less and less of a user of Google Reader these days, I prefer to follow interesting feeds on Twitter instead: I don’t have to digest all of the content on my timeline, only when I’m in the mood to see what’s going on out there. So, if you wanna follow the account, find it as @perlironman. If interested, you can also follow me. That is all.

David Moreno: Feedbag released under MIT license

14 August, 2016 - 21:13

I was contacted by Pivotal Labs regarding licensing of Feedbag. I guess releasing open source software as GPL only makes sense if you continue to live under a rock. I’ve bumped the version to 0.9 and released it under MIT license.

Feedbag 1.0, which I plan to work on during the following days, will bring in a brand new shiny backend powered by Nokogiri instead of Hpricot (I mean, give me a break, I'm trying to catch up with the Ruby community; after all, I'm primarily a Perl guy :D) and hopefully I will be able to recreate most of the feed auto-discovery test suite that Mark Pilgrim retired (410 Gone) when he committed infosuicide.

Have a good weekend!

David Moreno: Deploying a Dancer app on Heroku

14 August, 2016 - 21:08

There are a few different posts out there on how to run Perl apps, such as Mojolicious-based ones, on Heroku, but I’d like to show how to deploy a Perl Dancer application on Heroku.

The startup script of a Dancer application (bin/app.pl) can be used as a PSGI file. With that in mind, I was able to take the good work of Miyagawa’s Heroku buildpack for general PSGI apps and hack it a little bit to use Dancer’s, specifically. What I like about Miyagawa’s approach is that it uses the fantastic cpanm, and makes it available within your application, instead of the monotonous cpan, to resolve dependencies.
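
For context, here is a minimal sketch of what such a startup script looks like (the MyApp module name is a placeholder for your application’s module, lib/heroku.pm in the example below); when it is loaded by a PSGI server such as Starman, dance hands back the PSGI application instead of starting the standalone development server:

#!/usr/bin/env perl
# Minimal Dancer 1 startup script; also usable as a PSGI file (sketch)
use Dancer;
use MyApp;   # placeholder for the module generated under lib/
dance;       # returns the PSGI app under a PSGI server, otherwise runs the dev server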

Let’s make a simple Dancer app to show how to make this happen:

/tmp $ dancer -a heroku
+ heroku
+ heroku/bin
+ heroku/bin/app.pl
+ heroku/config.yml
+ heroku/environments
+ heroku/environments/development.yml
+ heroku/environments/production.yml
+ heroku/views
+ heroku/views/index.tt
+ heroku/views/layouts
+ heroku/views/layouts/main.tt
+ heroku/MANIFEST.SKIP
+ heroku/lib
+ heroku/lib/heroku.pm
+ heroku/public
+ heroku/public/css
+ heroku/public/css/style.css
+ heroku/public/css/error.css
+ heroku/public/images
+ heroku/public/500.html
+ heroku/public/404.html
+ heroku/public/dispatch.fcgi
+ heroku/public/dispatch.cgi
+ heroku/public/javascripts
+ heroku/public/javascripts/jquery.js
+ heroku/t
+ heroku/t/002_index_route.t
+ heroku/t/001_base.t
+ heroku/Makefile.PL

Now, you already know that by firing perl bin/app.pl you can get your development server up and running. So I’ll just proceed to show how to make this work on Heroku; you should already have your development environment configured for it:

/tmp $ cd heroku/
/tmp/heroku $ git init
Initialized empty Git repository in /private/tmp/heroku/.git/
/tmp/heroku :master $ git add .
/tmp/heroku :master $ git commit -a -m 'Dancer on Heroku'
[master (root-commit) 6c0c55a] Dancer on Heroku
22 files changed, 809 insertions(+), 0 deletions(-)
create mode 100644 MANIFEST
create mode 100644 MANIFEST.SKIP
create mode 100644 Makefile.PL
create mode 100755 bin/app.pl
create mode 100644 config.yml
create mode 100644 environments/development.yml
create mode 100644 environments/production.yml
create mode 100644 lib/heroku.pm
create mode 100644 public/404.html
create mode 100644 public/500.html
create mode 100644 public/css/error.css
create mode 100644 public/css/style.css
create mode 100755 public/dispatch.cgi
create mode 100755 public/dispatch.fcgi
create mode 100644 public/favicon.ico
create mode 100644 public/images/perldancer-bg.jpg
create mode 100644 public/images/perldancer.jpg
create mode 100644 public/javascripts/jquery.js
create mode 100644 t/001_base.t
create mode 100644 t/002_index_route.t
create mode 100644 views/index.tt
create mode 100644 views/layouts/main.tt
/tmp/heroku :master $

And now, run heroku create; please note the buildpack URL, http://github.com/damog/heroku-buildpack-perl.git:

/tmp/heroku :master $ heroku create --stack cedar --buildpack http://github.com/damog/heroku-buildpack-perl.git
Creating blazing-beach-7280... done, stack is cedar
http://blazing-beach-7280.herokuapp.com/ | git@heroku.com:blazing-beach-7280.git
Git remote heroku added
/tmp/heroku :master $

And just push:

/tmp/heroku :master $ git push heroku master
Counting objects: 34, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (30/30), done.
Writing objects: 100% (34/34), 40.60 KiB, done.
Total 34 (delta 3), reused 0 (delta 0)

-----> Heroku receiving push
-----> Fetching custom buildpack... done
-----> Perl/PSGI Dancer! app detected
-----> Bootstrapping cpanm
Successfully installed JSON-PP-2.27200
Successfully installed CPAN-Meta-YAML-0.008
Successfully installed Parse-CPAN-Meta-1.4404 (upgraded from 1.39)
Successfully installed version-0.99 (upgraded from 0.77)
Successfully installed Module-Metadata-1.000009
Successfully installed CPAN-Meta-Requirements-2.122
Successfully installed CPAN-Meta-2.120921
Successfully installed Perl-OSType-1.002
Successfully installed ExtUtils-CBuilder-0.280205 (upgraded from 0.2602)
Successfully installed ExtUtils-ParseXS-3.15 (upgraded from 2.2002)
Successfully installed Module-Build-0.4001 (upgraded from 0.340201)
Successfully installed App-cpanminus-1.5015
12 distributions installed
-----> Installing dependencies
Successfully installed ExtUtils-MakeMaker-6.62 (upgraded from 6.55_02)
Successfully installed YAML-0.84
Successfully installed Test-Simple-0.98 (upgraded from 0.92)
Successfully installed Try-Tiny-0.11
Successfully installed HTTP-Server-Simple-0.44
Successfully installed HTTP-Server-Simple-PSGI-0.14
Successfully installed URI-1.60
Successfully installed Test-Tester-0.108
Successfully installed Test-NoWarnings-1.04
Successfully installed Test-Deep-0.110
Successfully installed LWP-MediaTypes-6.02
Successfully installed Encode-Locale-1.03
Successfully installed HTTP-Date-6.02
Successfully installed HTML-Tagset-3.20
Successfully installed HTML-Parser-3.69
Successfully installed Compress-Raw-Bzip2-2.052 (upgraded from 2.020)
Successfully installed Compress-Raw-Zlib-2.054 (upgraded from 2.020)
Successfully installed IO-Compress-2.052 (upgraded from 2.020)
Successfully installed HTTP-Message-6.03
Successfully installed HTTP-Body-1.15
Successfully installed MIME-Types-1.35
Successfully installed HTTP-Negotiate-6.01
Successfully installed File-Listing-6.04
Successfully installed HTTP-Daemon-6.01
Successfully installed Net-HTTP-6.03
Successfully installed HTTP-Cookies-6.01
Successfully installed WWW-RobotRules-6.02
Successfully installed libwww-perl-6.04
Successfully installed Dancer-1.3097
29 distributions installed
-----> Installing Starman
Successfully installed Test-Requires-0.06
Successfully installed Hash-MultiValue-0.12
Successfully installed Devel-StackTrace-1.27
Successfully installed Test-SharedFork-0.20
Successfully installed Test-TCP-1.16
Successfully installed Class-Inspector-1.27
Successfully installed File-ShareDir-1.03
Successfully installed Filesys-Notify-Simple-0.08
Successfully installed Devel-StackTrace-AsHTML-0.11
Successfully installed Plack-0.9989
Successfully installed Net-Server-2.006
Successfully installed HTTP-Parser-XS-0.14
Successfully installed Data-Dump-1.21
Successfully installed Starman-0.3001
14 distributions installed
-----> Discovering process types
Procfile declares types -> (none)
Default types for Perl/PSGI Dancer! -> web
-----> Compiled slug size is 2.7MB
-----> Launching... done, v4
http://blazing-beach-7280.herokuapp.com deployed to Heroku

To git@heroku.com:blazing-beach-7280.git
* [new branch] master -> master
/tmp/heroku :master $

And you can confirm it works by visiting the app’s URL in a browser.

Please note that the environment it runs on is “deployment”. The backend server it uses is the great Starman, also by the great Miyagawa.
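
If you want to reproduce that setup locally, running the app under Starman with something like plackup -s Starman bin/app.pl should behave roughly the same as what Heroku launches.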

Now, if you add or change dependencies in Makefile.PL, they will get updated the next time you push.
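
As a rough sketch (not the exact file Dancer generates), the relevant part of Makefile.PL is just the PREREQ_PM list that declares your CPAN dependencies for cpanm to resolve:

# Makefile.PL (sketch): dependencies the buildpack installs via cpanm
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME      => 'heroku',
    VERSION   => '0.1',
    PREREQ_PM => {
        'Dancer' => '1.3097',
        # add further CPAN modules here; they get installed on the next push
    },
);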

Very cool, right? :)

Jamie McClelland: Noam use Gnome

14 August, 2016 - 07:41

I don't quite remember when I read John Goerzen's post about teaching a 4 year old to use the Linux command line with audio on Planet Debian. According to the byline it was published nearly 2 years before Noam was born, but I seem to remember reading it in the weeks after his birth, when I was both thrilled at the prospect of teaching my kid to use the command line and, in my sleepless stupor, not entirely convinced he would ever be old enough.

Well, the time came this morning. He found an old USB keyboard and discovered that a green light came on when he plugged it in. He was happily hitting the keys when Meredith suggested we turn on the monitor and open a program so he could see the letters appear on the screen and try to spell his name.

After 10 minutes in LibreOffice I remembered John's blog and was inspired to start writing a bash script in my head (I would have to stop the fun with LibreOffice to write it, so the pressure was on...). In the end it took only a few minutes and I came up with:

#!/bin/bash

while [ 1 ]; do
  read -p "What shall I say? "
  espeak "$REPLY"
done

It was a hit. He said what he wanted to hear and hit the keys; my job was to spell for him.

Oh, also: he discovered key combinations that did things that were unsurprising to me (like taking the screen grab above) and also things that I'm still scratching my head about (like causing a prompt on the screen that said: "Downloading shockwave plugin"). No thanks. And how did he do that?

Russ Allbery: git-pbuilder 1.42

14 August, 2016 - 04:31

A minor update to my glue script for building software with pdebuild and git-buildpackage. (Yes, still needs to get rewritten in Python.)

This release stops using the old backport location for oldstable builds since oldstable is now wheezy, which merged the backports archive into the regular archive location. The old location is still there for squeeze just in case anyone needs it.

It also adds a new environment variable, GIT_PBUILDER_PDEBUILDOPTIONS, that can be used to pass options directly to pdebuild. Previously, there was only a way to pass options to the builder via pdebuild, but not to configure pdebuild itself. There are times when that's useful, such as to pass --use-pdebuild-internal. This was based on a patch from Rafał Długołęcki.
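
For example, something like GIT_PBUILDER_PDEBUILDOPTIONS=--use-pdebuild-internal set in the environment when git-pbuilder runs (directly or via gbp buildpackage) should hand that flag straight to pdebuild.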

You can get the latest version of git-pbuilder from my scripts page.

Dariusz Dwornikowski: Automatic PostgreSQL config with Ansible

13 August, 2016 - 23:18

If for some reason you can’t use a dedicated DBaaS for your PostgreSQL (like AWS RDS), then you need to run your database server on a cloud instance. In this kind of setup, when you scale your instance up or down, you need to adjust PostgreSQL parameters according to the changing RAM size. There are several parameters in PostgreSQL that highly depend on RAM size. An example is shared_buffers, for which a rule of thumb says that it should be set to 0.25*RAM.

In DBaaS, when you scale the DB instance up or down, parameters are adjusted for you by the cloud provider, e.g. AWS RDS uses parameter groups for that reason, where particular parameters are defined depending on the size of the RAM of the RDS instance.

So what can you do when you do not have RDS or any other DBaaS? You can always keep several configuration files on your instance, each for a different memory size, you can rewrite your config every time you change the size of the instance… or you can use an Ansible role for that.

Our Ansible role will be very simple: it will have two tasks. One will change the PostgreSQL config, and the second one will just restart the database server:

---
- name: Update PostgreSQL config
  template: src=postgresql.conf.j2 dest=/etc/postgresql/9.5/main/postgresql.conf
  register: pgconf

- name: Restart postgresql
  service: name=postgresql state=restarted
  when: pgconf.changed

Now we need the template, where the calculations take place. RAM size will be taken from the Ansible fact called ansible_memtotal_mb. Since it returns RAM size in MB, we will stick to MB. We will define the following parameters; you can adjust them to your needs:

  • shared_buffers, as 0.25*RAM size,
  • work_mem, as shared_buffers/max_connections,
  • maintenance_work_mem, as RAM GBs times 64MB,
  • effective_cache_size, as 0.75*RAM size.

For max_connections we will define a default role variable of 100, but we will allow it to be specified at runtime. The relevant parts of the postgresql.conf.j2 are below:

 max_connections = {{ max_connections }}      
 shared_buffers = {{ (((ansible_memtotal_mb/1024.0)|round|int)*0.25)|int*1024 }}MB
 work_mem = {{ ((((ansible_memtotal_mb/1024.0)|round|int)*0.25)/max_connections*1024)|round|int }}MB
 maintenance_work_mem = {{ ((ansible_memtotal_mb/1024.0)|round|int)*64 }}MB
 effective_cache_size = {{ (((ansible_memtotal_mb/1024.0)|round|int)*0.75)|int*1024 }}MB
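
As a quick sanity check of those expressions: on an instance where ansible_memtotal_mb reports roughly 4096 (4 GB of RAM) and max_connections stays at the default 100, the template renders shared_buffers = 1024MB, work_mem = 10MB, maintenance_work_mem = 256MB and effective_cache_size = 3072MB.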

You can now run the role every time you change the instance size, and the config will be changed according to the RAM size. You can extend the role, maybe add other constraints, and change max_connections to your specific needs. An example playbook could look like:

---
- hosts: my_postgres
  roles:
    - postgres-config
  vars:
    max_connections: 300

And run it:

$ ansible-playbook playbook.yml

The complete role can be found in my github repo.

Russell Coker: SSD and M.2

13 August, 2016 - 21:35
The Need for Speed

One of my clients has an important server running ZFS. They need to have a filesystem that detects corruption: while regular RAID is good for the case where a disk gives read errors, it doesn’t cover the case where a disk returns bad data and claims it to be good (which I’ve witnessed on BTRFS and ZFS systems). BTRFS is good for the case of a single disk or a RAID-1 array, but I believe that the RAID-5 code for BTRFS is not sufficiently tested for business use. ZFS doesn’t perform very well due to the checksums on data and metadata requiring multiple writes for a single change, which also causes more fragmentation. This isn’t a criticism of ZFS, it’s just an engineering trade-off for the data integrity features.

ZFS supports read-caching on a SSD (the L2ARC) and write-back caching (ZIL). To get the best benefit of L2ARC and ZIL you need fast SSD storage. So now with my client investigating 10 gigabit Ethernet I have to investigate SSD.

For some time SSDs have been in the same price range as hard drives, starting at prices well below $100. Now there are some SSDs on sale for as little as $50. One issue with SATA for server use is that SATA 3.0 (which was released in 2009 and is most commonly used nowadays) is limited to 600MB/s. That isn’t nearly adequate if you want to serve files over 10 gigabit Ethernet, which is roughly 1250MB/s of raw bandwidth. SATA 3.2 was released in 2013 and supports 1969MB/s, but I doubt that there’s much hardware supporting that. See the SATA Wikipedia page for more information.

Another problem with SATA is getting the devices physically installed. My client has a new Dell server that has plenty of spare PCIe slots but no spare SATA connectors or SATA power connectors. I could have removed the DVD drive (as I did for some tests before deploying the server) but that’s ugly and only gives 1 device while you need 2 devices in a RAID-1 configuration for ZIL.

M.2

M.2 is a new standard for expansion cards; it supports SATA and PCIe interfaces (and USB, but that isn’t useful at this time). The Wikipedia page for M.2 is interesting to read for background knowledge but isn’t helpful if you are about to buy hardware.

The first M.2 card I bought had a SATA interface, but I was then unable to find a local company that could sell me a SATA M.2 host adapter. So I bought a M.2 to SATA adapter, which made it work like a regular 2.5″ SATA device. That’s working well in one of my home PCs but isn’t what I wanted. Apparently systems that have a M.2 socket on the motherboard will usually take either SATA or NVMe devices.

The most important thing I learned is to buy the SSD storage device and the host adapter from the same place; then you are entitled to a refund if they don’t work together.

The alternative to the SATA (AHCI) interface on an M.2 device is known as NVMe (Non-Volatile Memory Express), see the Wikipedia page for NVMe for details. NVMe not only gives a higher throughput but it gives more command queues and more commands per queue which should give significant performance benefits for a device with multiple banks of NVRAM. This is what you want for server use.

Eventually I got a M.2 NVMe device and a PCIe card for it. A quick test showed sustained transfer speeds of around 1500MB/s which should permit saturating a 10 gigabit Ethernet link in some situations.

One annoyance is that the M.2 devices have a different naming convention to regular hard drives. I have devices /dev/nvme0n1 and /dev/nvme1n1; apparently that is to support multiple storage devices on one NVMe interface. Partitions have device names like /dev/nvme0n1p1 and /dev/nvme0n1p2.

Power Use

I recently upgraded my Thinkpad T420 from a 320G hard drive to a 500G SSD which made it faster but also surprisingly quieter – you never realise how noisy hard drives are until they go away. My laptop seemed to feel cooler, but that might be my imagination.

The i5-2520M CPU in my Thinkpad has a TDP of 35W but uses a lot less than that as I almost never have 4 cores in use. The z7k320 320G hard drive is listed as having 0.8W “low power idle” and 1.8W for read-write, maybe Linux wasn’t putting it in the “low power idle” mode. The Samsung 500G 850 EVO SSD is listed as taking 0.4W when idle and up to 3.5W when active (which would not be sustained for long on a laptop). If my CPU is taking an average of 10W then replacing the hard drive with a SSD might have reduced the power use of the non-screen part by 10%, but I doubt that I could notice such a small difference.

I’ve read some articles about power use on the net which can be summarised as “SSDs can draw more power than laptop hard drives but if you do the same amount of work then the SSD will be idle most of the time and not use much power”.

I wonder if the SSD being slightly thicker than the HDD it replaced has affected the airflow inside my Thinkpad.

From reading some of the reviews it seems that there are M.2 storage devices drawing over 7W! That’s going to create some cooling issues on desktop PCs but should be OK in a server. For laptop use they will hopefully release M.2 devices designed for low power consumption.

The Future

M.2 is an ideal format for laptops due to being much smaller and lighter than 2.5″ SSDs. Spinning media doesn’t belong in a modern laptop and using a SATA SSD is an ugly hack when compared to M.2 support on the motherboard.

Intel has released the X99 chipset with M.2 support (see the Wikipedia page for Intel X99) so it should be commonly available on desktops in the near future. For most desktop systems an M.2 device would provide all the storage that is needed (or 2*M.2 in a RAID-1 configuration for a workstation). That would give all the benefits of reduced noise and increased performance that regular SSDs provide, but with better performance and fewer cables inside the PC.

For a corporate desktop PC I think the ideal design would have only M.2 internal storage and no support for 3.5″ disks or a DVD drive. That would allow a design that is much smaller than a current SFF PC.


David Moreno: Perl 5.12's each function

13 August, 2016 - 15:27

With Perl 5.12 released earlier this summer, the each function got a nice little addition that I’d like to talk about: it now has the ability to work on arrays, not only on key-value pair hashes, though not exactly as you’d expect (not like Ruby’s each method).

The traditional way to work with each, using a hash:

my %h = (
    a => 1,
    b => 2,
    c => 3,
);

while(my($key, $value) = each %h) {
    say "index: $key => value: $value";
}

And the output; of course, hashes being unordered, you won’t get the nice predictable order of an array.

index: c => value: 3
index: a => value: 1
index: b => value: 2

Now, when you use an array, each will return the next pair from the array, consisting of the index of the element and the element itself. Since it returns the next pair, you can iterate through it in the same fashion as when using a hash:

my @arr = ('a'..'z');

while(my($index, $value) = each @arr) {
    say "index: $index => value: $value";
}

This is particularly useful for getting both the index and the element into named variables while looping through an array.

index: 0 => value: a
index: 1 => value: b
index: 2 => value: c
index: 3 => value: d
index: 4 => value: e
index: 5 => value: f
index: 6 => value: g
index: 7 => value: h
index: 8 => value: i
index: 9 => value: j
index: 10 => value: k
index: 11 => value: l
index: 12 => value: m
index: 13 => value: n
index: 14 => value: o
index: 15 => value: p
index: 16 => value: q
index: 17 => value: r
index: 18 => value: s
index: 19 => value: t
index: 20 => value: u
index: 21 => value: v
index: 22 => value: w
index: 23 => value: x
index: 24 => value: y
index: 25 => value: z
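
For comparison, here is a sketch of the pre-5.12 idiom this replaces, where you track the index yourself:

# Before Perl 5.12 you had to carry the index explicitly
for my $index (0 .. $#arr) {
    my $value = $arr[$index];
    say "index: $index => value: $value";
}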

David Moreno: Running find with two or more commands to -exec

13 August, 2016 - 15:25

I spent a couple of minutes today trying to understand how to make find(1) execute two commands on the same target.

Instead of this or any similar crappy variants:

$ find . -type d -iname "*0" -mtime +60 -exec scp -r -P 1337 "{}" "meh.server.com:/mnt/1/backup/storage" && rm -rf "{}" \;

Try something like this:

$ find . -type d -iname "*0" -mtime +60 -exec scp -r -P 1337 "{}" "meh.server.com:/mnt/1/backup/storage" \; -exec rm -rf "{}" \;

Which is:

$ find . -exec command1 {} \; -exec command2 {} \;

And you’re good to go.

David Moreno: Disable Nginx logging

13 August, 2016 - 15:23

This is something that is specified clearly in the Nginx manual, but it’s nice to have it as a quick reference.

The access_log and error_log directives in Nginx come from separate modules (the HTTP Log and Core modules, respectively) and they don’t behave the same way when all you want is to disable all logging on your server (in our case, we serve a gazillion static files and perform a lot of reverse proxying, and we’re not interested in tracking that). It’s a common misconception that you can set error_log to off, because that’s how you disable access_log (if you do that, the server will still log to the file $nginx_path/off). Instead, you have to set error_log to log to the always mighty black hole /dev/null using the highest level for logging (which triggers the fewest events), crit:

http {
  server {
    # ...
    access_log off;
    error_log /dev/null crit;
    # ...
  }
  #...
}

If you’re the possessor of the blingest of bling-bling, you can disable all logging (not only for a server block) by putting error_log at the root of the configuration and access_log within your http block, making sure you don’t override them in any of the inner blocks. And you’re good to go.

David Moreno: RVM + Rake tasks on Cron jobs

13 August, 2016 - 15:20

RVM hates my guts. And it doesn’t matter, because I hate RVM back even more. Since I was technologically raised by aging wolves, I have strong reasons to believe that you just shouldn’t have mixed versions of software on your production systems, especially when a lot of things are poorly tested and, like most Ruby libraries, aren’t backward compatible. I was raised in a world where everything worked greatly because the good folks at projects like Debian or Perl have some of the greatest QA processes of all time, hands down. So, when someone introduces a thing like RVM, which not only promotes having hundreds of different versions of the same software in development, testing and production environments, but also encourages poor quality looking back and looking forward, there isn’t anything else to do but lose faith in humanity.

But enough for ranting.

I had to deliver this side project that works with the Twitter API, and the only requirement pretty much was that it had to both run from the shell and be loadable as a class within a Ruby script. And so I did everything locally with my great local installation of Ruby 1.8.7. When the time came to load it on the testing/production server, I found myself in a situation where pathetic RVM was installed. After spending hours trying to accommodate my changes to run properly with Ruby 1.9.2, I set up a cron job using crontab to run my shit every now and then. And the shit wasn’t even running properly. Basically, my crontab line looked something like this:

*/30 * * * * cd /home/david/myproject && rake awesome_task

And that was failing; crontab was returning some crazy shit like “Ruby and/or RubyGems cannot find the rake gem”. Seriously? Then I thought, well, maybe my environment needs to be loaded and whatever, so I made a bash script with something like this:

#!/bin/bash
cd /home/david/myproject
/full/path/to/rvm/bin/rake -f Rakefile awesome_task

And that was still failing with the same error. So after trying to find out how cron jobs and crontab load Bash source files, I took a look at how Debian starts its shell upon login. And while that didn’t tell me much I didn’t already know, I went to look at the system-wide /etc/profile and found a gem, a wonderful directory /etc/profile.d/ where a single shitty file was sitting, smiling back at me, like it was waiting for me to find it and swear at all my problems in life: rvm.sh. /etc/profile is not loaded when I just run /bin/bash from my crappy script, only when I log in; I should’ve known this. Doesn’t RVM solve the issue of having system-wide installations so the user doesn’t have to deal with, you know, anything outside of his own /home?

So I had to go ahead and do:

#!/bin/bash
source /etc/profile
cd /home/david/myproject
/full/path/to/rvm/bin/rake -f Rakefile awesome_task

And hours later I was able to continue with work. Maybe this will help some poor bastard like myself in a similar situation in the future.

Of course one can argue that I could’ve installed my own RVM and its Ruby versions, but why, oh why, if it was, apparently, already there. Why would I have to fiddle with the Ruby installations if all I want is get my shit done and head to City Bakery where I can spend that money I just earned on chocolate cookies? My work is pretty simple to run with pretty much any ancient version of Ruby, nothing fancy (unless you call MongoMapper fancy). RVM is a great project that doesn’t solve an issue, but just hides some really fucked up shit on the Ruby community.

David Moreno: Geo::PostalCode::NoDB 0.01

13 August, 2016 - 15:19

Geo::PostalCode is a great Perl module. It lets you find surrounding postal areas (zip codes) within a given radius in miles, calculate the distance between them, and other nice features. Sadly, I couldn’t get it to work with updated data, because the file its Berkeley DB installer was producing was not being recognized by its parser, which is based on DB_File. Since I was able to find working data for the source of zip codes, I ended up hacking the module and producing a version with no Berkeley DB support.

So basically, and taken from the POD:

RATIONALE BEHIND NO BERKELEY DB
On a busy day at work, I couldn’t get Geo::PostalCode to work with newer data (the data source TJMATHER points to is no longer available), so the tests shipped with his module pass, but trying to use real data no longer seems to work. DB_File marked the Geo::PostalCode::InstallDB output file as invalid type or format. If you don’t run into that issue by not wanting to use this module, please drop me a note! I would love to learn how other people made it work.

So, in order to get my shit done, I decided to create this module. Loading the whole data into memory from the class constructor has been proven to be enough for massive usage (citation needed) on a Dancer application where this module is instantiated only once.

$ sudo cpanm Geo::PostalCode::NoDB now!
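
As a rough, hypothetical usage sketch (the csvfile constructor argument and the method below mirror the original Geo::PostalCode interface and are assumptions rather than documented API, so check the POD):

use Geo::PostalCode::NoDB;

# Assumed interface: the constructor loads the whole zip code CSV into memory
my $gp = Geo::PostalCode::NoDB->new(csvfile => 'zipcodes.csv');

# Assumed method, as in Geo::PostalCode: zip codes within 25 miles of 10001
my $codes = $gp->nearby_postal_codes(postal_code => '10001', distance => 25);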

David Moreno: I quit

13 August, 2016 - 15:00

I just recently quit my job at the startup company I had been working at for almost five years. In startup terms, such a long time might be a whole lifetime, but in my case, I grew to like it more and more as the years went by; I had evolved from being just another engineer to leading a team of seven great developers, with decision-making and strategy-planning responsibilities for our technical infrastructure. It’s been such a great, long, teaching journey that I’m nothing but pleased with my own performance, the lessons and skills I learned, and all I provided to and was provided by the project.

Leaving a city like New York is not an easy task. You have it all there, you start making a life and suddenly, before you know it, you already have a bunch of ties to the place: people, leases, important dates, all kinds of shit. Seriously, all kinds of crazy ass shit start to fill up your baggage. You wake up every day, get on the subway and commute surrounded by all of these people that are just like you: so similar yet so immensely different. No, leaving the city is not an easy task, it’s not something to take lightly. You know how people say “my cycle has ended in this place” as a euphemism so as not to end on bad terms with anyone? Well, ending a cycle is indeed a reality; I got to a point where I felt like I needed to head in a different direction, take on new challenges and, overall, peace out and wish the best to everyone, especially to myself.

This was me on my last day at work, the last Friday of June.

(Some) people seem to be anxious to know what I’m doing next, and my answer is, go mind your own fucking business. However, life is short and I would love to do any of the following:

  • Go back to Brazil again, now as a blue belt in Brazilian jiujitsu, and train non-stop in Rio, this time as a local. I happened to come to Rio last November (as a four stripe white belt) and it was a great experience, with the Connection Rio guys. I kind of regretted not staying any longer, as a lot of people do, maybe three months. You don’t get to do anything else but train and roll with black belts on a daily basis, eat the healthy good stuff that a wonderful country like Brazil has to offer, hang out with amazing people and chill the fuck out all day long.

  • Make a road trip through Central America and get to know all of those countries where I’ve never been, even though I’ve travelled extensively around them for the last few years. I would head to the southernmost tip of Mexico and then take a bus to backpack through the cities all the way to Panama. Beer all along, a lot of swimming, plenty of heaven.

  • Head to any Russian consulate so I can get an entry visa for their amazing country and travel to any chess club in any of its big cities. Or maybe Hungary (do I need a visa to visit it?). Staying in small hostels where all I could use is a few good chess books and a chessboard, absorbing myself in chess, sounds like a dream come true.

  • Stop procrastinating and write all the good Perl stuff I’ve been wanting to do on my own time. All of those good projects I always thought of and only had the opportunity to try at work, but not in a giving-back-to-the-community kind of way.

Decisions, decisions…

For the time being, I’m chilling with my people, friends and family in beautiful Mexico City. I’ve been doing so for the entire month of July and I couldn’t be more content. August will see my 28th birthday and as I approach thirty, I believe I need to continue moving forward.

This stupid world is a tiny place and our lives are short; I, for one, will definitely try to take the bull by the horns.

Thanks for reading, more updates soon. Peace.

David Moreno: Another day, another dawn

13 August, 2016 - 14:58

I started working for Booking.com two weeks ago now. These last two months have probably been the most chaotic and hectic I can remember in a long time. I left New York City, chilled out in Mexico City for several weeks and then finally relocated to beautiful Amsterdam. The decision of leaving New York was not simple, but I decided to take on new challenges; one cycle had ended for me and it was time for me to move on.

The change has not been easy, but it hasn’t been bad at all. I knew that coming from an ever-mutating startup in New York City to a company with a presence in dozens of countries and several thousand employees was not gonna be a simple change. I decided to proceed with this because I want to learn. I want to learn how other businesses use technology, especially the kind that I’m most interested in, to become successful. I want to understand how they operate. I want to help a company succeed, take on new technology lessons and learn from more experienced peers. Thankfully, I’m not financially burdened and I’m able to make this decision myself without affecting (too much) others. I wanted the adventure (again) of relocating to a new country, and making it to a company that has been successful for so many years, hugely based on Perl, that has contributed boldly to its community, with so many great Perl developers in the mix, etc., just made it all worth it.

Being the new guy is never easy, and I haven’t really been one of those often. I wish I already knew how a lot of things work, so I could contribute faster. At the end of the day, this is all about a business, and it’s about what developers and engineers do to contribute, to give back, to produce something that ends up as revenue and profit, and I want to do just that. Benefits and perks are always nice, but what I really want is to prove to myself and to those who hired me the kind of competent talent that I am.

If you can understand Spanish, this video explains it better.

Oh and yeah, I turned 28 during my first week of work here in the Netherlands and was lucky enough to have a very happy one with plenty of hopes and goals.

Cheers.

Jonathan Dowland: Lush (and friends)

12 August, 2016 - 23:20

The gig poster

On July 31st a friend and I went to see Maxïmo Park and support at a mini-festival day in Times Square, Newcastle. The key attraction of this gig for me was the top support band, Lush, who are back after a nearly 20-year hiatus.

Nano Kino 7"

I first heard of Lush quite recently from the excellent BBC Documentary Girl in a Band: Tales from the Rock 'n' Roll Front Line. They were excellent: the set was quite heavy on material from their more dreampop/shoegaze albums which is to my taste.

Maxïmo 7"s

I also particularly enjoyed Warm Digits, motorik instrumental dance stuff that reminded me of Lemon Jelly mixed with Soulwax, who had two releases very reasonably priced on the merch stand; Nano Kino in the adjacent "Other Rooms", also channelling dreampop/shoegaze; and finally Maxïmo Park themselves. I was there for Lush really, but I still really enjoyed the headliners. I've seen them several times but I've lost track of what they've been up to in recent years. Both their earliest material and several newer songs were well received by the home crowd, and the atmosphere in the enclosed Times Square was excellent.

Markus Koschany: My Free Software Activities in July 2016

12 August, 2016 - 21:52

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian.

Debian Android

Debian Games
  • This month GCC-6 bugs became release critical. I fixed and triaged those kinds of bugs in games like supertransball2, berusky2, freeorion, bloboats, armagetronad and megaglest.
  • I packaged new upstream releases of scorched3d, bzflag, spring, springlobby, freeorion, freeciv and extremetuxracer.
  • Freeciv, one of the best strategy games ever by the way, also got a new binary package, freeciv-client-gtk3. This package will eventually become the new default client for playing the game. You are welcome to test it.
  • I packaged a new upstream release of adonthell and adonthell-data. This game is built with Python 3 and SDL 2 now and also uses the latest version of flex to generate its sources. We will probably see only one other future upstream release of adonthell because the main developer has decided to move on after more than 15 years of development.
  • I fixed another RC bug in minetest, updated whichwayisup for this release cycle and moved the package to Git.
Debian Java

Debian LTS

This was my sixth month as a paid contributor and I have been paid to work 14.7 hours on Debian LTS. In that time I did the following:

  • DLA-554-1. I spent most of the time this month on completing my work on libarchive. I issued DLA-554-1 and fixed 18 CVE plus another issue which was later assigned CVE-2016-6250.
  • DLA-555-1. Issued a security update for python-django fixing 1 CVE.
  • DLA-561-1. Issued a security update for uclibc fixing 3 CVE.
  • DLA-562-1. Issued a security update for gosa fixing 1 CVE. I could triage another open CVE as not-affected after confirming that the issue had already been fixed two years ago.
  • DLA-568-1. Issued a security update for wordpress fixing 6 CVE. I decided to go ahead with this update because I could not find any regressions. Unfortunately this wasn’t true for my intended fix for CVE-2015-8834. The database upgrade did not succeed hence I decided to postpone the fix for CVE-2015-8834 until we can narrow down the issue.
  • DLA-576-1. Issued a security update for libdbd-mysql-perl fixing 2 CVE.
  • From 04. July to 10. July I was in charge of our LTS frontdesk. I triaged CVEs in librsvg, bind9, trn, pdns and drupal7 and answered questions on the debian-lts mailing list.
Misc and QA
  • I fixed another GCC-6 bug in wbar, a light and fast launch bar.
  • Childsplay and gvrng were orphaned last month. I updated both of them, fixed the RC-bug in childsplay (non-free font) and moved the packages to the Debian QA Group.

Matthew Garrett: Microsoft's compromised Secure Boot implementation

12 August, 2016 - 04:58
There's been a bunch of coverage of this attack on Microsoft's Secure Boot implementation, a lot of which has been somewhat confused or misleading. Here's my understanding of the situation.

Windows RT devices were shipped without the ability to disable Secure Boot. Secure Boot is the root of trust for Microsoft's User Mode Code Integrity (UMCI) feature, which is what restricts Windows RT devices to running applications signed by Microsoft. This restriction is somewhat inconvenient for developers, so Microsoft added support in the bootloader to disable UMCI. If you were a member of the appropriate developer program, you could give your device's unique ID to Microsoft and receive a signed blob that disabled image validation. The bootloader would execute a (Microsoft-signed) utility that verified that the blob was appropriately signed and matched the device in question, and would then insert it into an EFI Boot Services variable[1]. On reboot, the boot loader reads the blob from that variable and integrates that policy, telling later stages to disable code integrity validation.

The problem here is that the signed blob includes the entire policy, and so any policy change requires an entirely new signed blob. The Windows 10 Anniversary Update added a new feature to the boot loader, allowing it to load supplementary policies. These must also be signed, but aren't tied to a device id - the idea is that they'll be ignored unless a device-specific policy has also been loaded. This way you can get a single device-specific signed blob that allows you to set an arbitrary policy later by using a combination of supplementary policies.

This is all fine in the Anniversary Edition. Unfortunately older versions of the boot loader will happily load a supplementary policy as if it were a full policy, ignoring the fact that it doesn't include a device ID. The loaded policy replaces the built-in policy, so in the absence of a base policy a supplementary policy as simple as "Enable this feature" will effectively remove all other restrictions.

Unfortunately for Microsoft, such a supplementary policy leaked. Installing it as a base policy on pre-Anniversary Edition boot loaders will then allow you to disable all integrity verification, including in the boot loader. Which means you can ask the boot loader to chain to any other executable, in turn allowing you to boot a compromised copy of any operating system you want (not just Windows).

This does require you to be able to install the policy, though. The PoC released includes a signed copy of SecureBootDebug.efi for ARM, which is sufficient to install the policy on ARM systems. There doesn't (yet) appear to be a public equivalent for x86, which means it's not (yet) practical for arbitrary attackers to subvert the Secure Boot process on x86. I've been doing my testing on a setup where I've manually installed the policy, which isn't practical in an automated way.

How can this be prevented? Installing the policy requires the ability to run code in the firmware environment, and by default the boot loader will only load signed images. The number of signed applications that will copy the policy to the Boot Services variable is presumably limited, so if the Windows boot loader supported blacklisting second-stage bootloaders Microsoft could simply blacklist all policy installers that permit installation of a supplementary policy as a primary policy. If that's not possible, they'll have to blacklist the vulnerable boot loaders themselves. That would mean all pre-Anniversary Edition install media would stop working, including recovery and deployment images. That's, well, a problem. Things are much easier if the first case is true.

Thankfully, if you're not running Windows this doesn't have to be an issue. There are two commonly used Microsoft Secure Boot keys. The first is the one used to sign all third party code, including drivers in option ROMs and non-Windows operating systems. The second is used purely to sign Windows. If you delete the second from your system, Windows boot loaders (including all the vulnerable ones) will be rejected by your firmware, but non-Windows operating systems will still work fine.

From what we know so far, this isn't an absolute disaster. The ARM policy installer requires user intervention, so if the x86 one is similar it'd be difficult to use this as an automated attack vector[2]. If Microsoft are able to blacklist the policy installers without blacklisting the boot loader, it's also going to be minimally annoying. But if it's possible to install a policy without triggering any boot loader blacklists, this could end up being embarrassing.

Even outside the immediate harm, this is an interesting vulnerability. Presumably when the older boot loaders were written, Microsoft policy was that they would never sign policy files that didn't include a device ID. That policy changed when support for supplemental policies was added. Without this policy change, the older boot loaders could still be considered secure. Adding new features can break old assumptions, and your design needs to take that into account.

[1] EFI variables come in two main forms - those accessible at runtime (Runtime Services variables) and those only accessible in the early boot environment (Boot Services variables). Boot Services variables can only be accessed before ExitBootServices() is called, and in Secure Boot environments all code executing before this point is (theoretically) signed. This means that Boot Services variables are nominally tamper-resistant.

[2] Shim has explicit support for allowing a physically present machine owner to disable signature validation - this is basically equivalent


Christoph Egger: Looking for a replacement Homeserver

11 August, 2016 - 19:15

Almost exactly six years ago I bought one of these Fuloong 6064 mini PCs. The machine has been working great ever since, both collecting my mail and acting as an IMAP server as well as providing public services -- it's also keyserver.siccegge.de. However, jessie is supposed to be the last Debian release supporting the hardware, and the system is rather slow and lacks memory. This is especially noticeable with IMAP spam filter training and mail indexing. Therefore I'm looking for a nice replacement -- preferably non-x86 again (no technical reasons). My requirements are pretty simple:

  • Works with vanilla stretch (and stretch kernel)
  • Still works with Debian stable six years from now
  • Faster (single-core performance, 2-4 cores would be nice as well); currently it's a 900MHz super-scalar, out-of-order MIPS64 CPU
  • Consumes less power
  • SATA port
  • Preferably fanless
  • At most the same price range, around 200 EUR including case and shipping

Now I'd consider one of these ARM boards and get it a nice case, but they all seem to either fall short on SATA or not be faster at all (and one needs to go for outdated hardware to stand a chance of mainline kernel support). If anyone knows something nice and non-x86, I'll happily take suggestions.

Petter Reinholdtsen: Coz can help you find bottlenecks in multi-threaded software - nice free software

11 August, 2016 - 17:00

This summer, I read a great article "coz: This Is the Profiler You're Looking For" in USENIX ;login: about how to profile multi-threaded programs. It presented a system for profiling software by running experiments in the running program, testing how run time performance is affected by "speeding up" parts of the code to various degrees compared to a normal run. It does this by slowing down parallel threads while the "sped up" code is running and measuring how this affects processing time. The processing time is measured using probes inserted into the code, either as progress counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It can also measure unmodified code by measuring the complete program runtime and running the program several times instead.

The project and presentation were so inspiring that I would like to get the system into Debian. I created a WNPP request for it and contacted upstream to try to make the system ready for Debian by sending patches. The build process needs to be changed a bit to avoid running 'git clone' to get dependencies, and to include the JavaScript web page used to visualize the collected profiling information in the source package. But I expect that should work out fairly soon.

The way the system works is fairly simple. To run a coz experiment on a binary with debug symbols available, start the program like this:

coz run --- program-to-run

This will create a text file profile.coz with the instrumentation information. To show which parts of the code affect the performance most, use a web browser and either point it to http://plasma-umass.github.io/coz/ or use the copy from git (in the gh-pages branch). Check out this web site to have a look at several example profiling runs and get an idea of what the end result from the profile runs looks like. To make the profiling more useful you include <coz.h> and insert COZ_PROGRESS or COZ_BEGIN and COZ_END at appropriate places in the code, rebuild and run the profiler. This allows coz to do more targeted experiments.

A video published by ACM presenting the Coz profiler is available from Youtube. There is also a paper from the 25th Symposium on Operating Systems Principles available titled Coz: finding code that counts with causal profiling.

The source code for Coz is available from github. It will only build with clang because it uses a C++ feature missing in GCC, but I've submitted a patch to solve it and hope it will be included in the upstream source soon.

Please get in touch if you, like me, would like to see this piece of software in Debian. I would very much like some help with the packaging effort, as I lack in-depth knowledge of how to package C++ libraries.

Tom Marble: webica

11 August, 2016 - 02:54
webica

I've just pushed the first version of my new Clojure wrapper for Selenium called webica.

The reason I need webica is that I want to do automated browser testing for ClojureScript based web applications. Certainly NodeJS, PhantomJS, Nashorn and the like are useful... but these can't quite emulate the full browser experience. We want to test our ClojureScript web apps in browsers -- ideally via our favorite automated continuous integration tools.




My new approach with the webica library is to do full Java introspection in the spirit that amazonica does for the AWS API. In fact I wanted to take it a step further by actually generating Clojure source code via introspection that can be used by Codox to generate nice API docs (which you don't get with amazonica). That, alas, was a little trickier than expected due to pesky Quine-like problems.

If you load the library on the REPL you can get a feeling for each namespace by calling the show-functions function.

I realize this approach of aggressive introspection, playing fast and loose with types, and application-level dynamic dispatch are crazy antipatterns. In my defense I started out playing around to see "if I could do it". After seeing the result in the form of a shell script in Clojure -- imitating lmgtfy -- perhaps webica will actually be useful!

I plan to talk about webica tonight at clojure.mn -- hope to see you there!


Creative Commons License: the copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.